
tv   Part 1: Facebook, Google and Twitter CEOs Testify on Combating Online...  CSPAN  March 26, 2021 3:01am-6:02am EDT

3:01 am
3:02 am
>> today's hearing is being held remotely. all members and witnesses will be participating via video conferencing. as part of our hearing, microphones will be set on mute to eliminate inadvertent background noise. witnesses and speakers will need to unmute their microphones to speak. due to the anticipated length of this hearing, the committee will take a 15-minute recess around 3:00 to provide witnesses a bathroom break. all documents will be entered into the record at the conclusion of the hearing. the chair will now recognize himself for five minutes. our nation is drowning in
3:03 am
disinformation driven by social media. platforms that were once used to share photos of kids with grandparents are all too often havens of hate, harassment and division. the way i see it, there are two faces to each of your platforms. facebook has the family and friends neighborhood, but it is right next to the one where there's a white nationalist rally every day, where covid deniers and qanon supporters are sharing videos. twitter allows you to bring celebrities into your home, and also holocaust deniers, terrorists and worse. almost everything is scripted on social media platforms. facebook recognizes anti-social
3:04 am
tendencies in one user and invites them to visit the white nationalist rally. youtube sees another user as interested in covid-19 and auto-starts an anti-vaccine video. on twitter, a user follows the trending conversation, never knowing it's coordinated by disinformation networks run by foreign agents. your platforms have changed how people across the planet communicate, connect, learn and stay informed. the power of this technology is awesome and terrifying, and each of you has failed to protect your users and the world from the worst consequences of your creations. this is the first time the three of you have appeared before congress since the deadly attack on the capitol on january 6th. that event was not just an attack on our democracy and electoral process but an attack on every member of this committee and of the congress. many of us were on the house floor and in the capitol when
3:05 am
that attack occurred, and we were forced to stop our work of certifying the election and retreat to safety, some of us wearing gas masks and fearing for our lives. we fled as the mob descended on our democratic process. people died that day and many were seriously injured. your platforms suggested groups for people to join, videos they should view and posts they should like, driving this movement forward with terrifying speed and efficiency. fbi documents show that many of these individuals used your platforms to plan, recruit and execute this attack. according to independent research, users on facebook were exposed 1.1 billion times to misinformation related to the election last
3:06 am
year alone, despite your claims that you removed that disinformation. nearly 550,000 americans have lost their lives to the deadly disease during this pandemic, more than in any other country on the planet. and an independent study found that on facebook alone, users across five countries, including the united states, were exposed to covid disinformation an estimated 8.3 billion times, again, despite claims of fixes and reforms. and now, as the biden administration is working to implement the american rescue plan and get vaccines into people's arms, we are faced with waves of disinformation on social media about the safety and efficacy of these shots. these vaccines are the best chance we have to fight this virus, and the content that your websites are still promoting, still recommending and still sharing is one of the biggest reasons people are refusing the vaccine. things haven't changed.
3:07 am
my staff found content on youtube telling people not to get vaccines and were then recommended similar videos. the same was true on instagram, where it was not only easy to find vaccine disinformation, but the platform recommended similar content. the same thing happened on facebook, and twitter was no different. if you go to any of these super-spreader accounts that remain up despite the policies meant to curb this anti-vax content, you will see the content. understand this: you can take this content down, you can reduce its spread, and you can fix this, but you choose not to. we saw your platforms remove isis terrorist content, and we saw you tamp down on covid misinformation, and you have
3:08 am
removed serial misinformation super-spreaders, but time after time you are picking engagement over the safety of your users. it seems like you shrug off billion-dollar fines. we need rules, regulations, technical experts in government and audit authority over your technologies. ours is the committee of jurisdiction, and we will legislate to stop this. the stakes are simply too high. the chair will now recognize mr. latta, ranking member of the subcommittee on communications and technology. >> thank you for recognizing me, and i want to thank our witnesses for being with us today for a conversation that is long overdue in the energy and commerce committee.
3:09 am
your companies have the power to silence the president of the united states, shut off legitimate journalism in australia, and shut off debate on a variety of issues. when these actions are taken, users have little to no recourse to appeal the decision, if they are even aware of your actions. in most cases we simply don't know. what does this mean for everyday americans? we are all aware of big tech's ever-increasing censorship of voices and their influence on a generation of children who are canceling any news, books, and now toys that are not considered woke. this is fundamentally un-american. at a recent hearing on
3:10 am
disinformation and extremism online, one of the nation's foremost experts on constitutional law testified about the little brother problem, in which private entities do for the government things it cannot do for itself. google has a greater than 92% market share in search. facebook has over 2.7 billion monthly users. your companies have enormous control over whose ideas are seen, read, or heard around the world. this gives you great power, and if it is misused, as we have seen in recent years, your actions have a ripple effect throughout the world, resulting in american voices being silenced for their ideas. other serious harms are occurring on these platforms that affect ordinary americans. young american children and teenagers are addicted --
3:11 am
actually addicted to their devices and social media. it will only get worse as children continue to be separated from their peers and teachers in the classroom. your platforms are purposely designed to keep our children hooked to their screens. studies link social media use to increased depression and suicide among america's youth. illegal drugs continue to be sold online. mr. chairman, i ask for a chance to submit a letter to the record. >> without objection. >> thank you very much. serious problems continue to persist, and i wonder how much you are truly dedicated to combating these actions. what actions are you taking to alert americans to the dangers of using your sites?
3:12 am
we have oversight of any change made to section 230 of the communications decency act. section 230 provides you with liability protection for content moderation decisions made in good faith. based on recent actions, however, it's clear that your definition of good-faith moderation includes censoring viewpoints you disagree with, without a full, independent appeals process, and making content moderation decisions that are not based on american principles of free speech, and i find that highly concerning. with that, mr. chairman, i yield back the balance of my time. >> the chair now recognizes the chair of the subcommittee on consumer protection and commerce for
3:13 am
five minutes for her opening statement. >> thank you, it's a pleasure to co-chair this hearing with you. i want to welcome our witnesses and thank them for coming. it is not an exaggeration to say that your companies have fundamentally and permanently transformed our very culture and our understanding of the world. much of this is for good, but it is also true that our country, our democracy and even our understanding of what is truth have been harmed by the proliferation and dissemination of misinformation and extremism, all of which has deeply divided us. what our witnesses today need to take away from this hearing is that self-regulation has come to the end of its road and that this democracy, this
3:14 am
democratic body -- the people that you see before you, elected by the people -- is preparing to move forward with legislation and regulation. the regulation that we seek should not attempt to limit constitutionally protected freedom of speech, but it must hold platforms accountable when they are used to incite violence and hatred or, as in the case of the covid pandemic, spread misinformation that costs thousands of lives. all three of the companies that are here today run platforms that are hotbeds of misinformation and disinformation. despite all the promises and new policies to match,
3:15 am
disinformation was rampant in the 2020 election, especially targeting vulnerable communities. for example, spanish-language ads run by the trump campaign falsely accused biden of being endorsed by the venezuelan president. disinformation fed upon itself until it arrived at the capitol of the united states on january 6th, which cost five lives. the lives lost in the insurrection were not the first cases of the platforms' failure, or the worst. in 2018 facebook admitted a
3:16 am
genocide in myanmar was planned and executed on facebook. then came the covid disinformation film they called plandemic. this film got 150,000 shares before it was removed. disinformation like plandemic made people skeptical of the need for vaccines and almost certainly contributed to the horrible loss of life during the pandemic. disinformation also hops
3:17 am
from platform to platform. plandemic was first on youtube before it was on facebook, instagram and twitter. misinformation regarding the election dropped 73% across social media platforms after twitter permanently suspended trump as well as accounts tied to the capitol insurrection and qanon. but the question is, what took so long? the witnesses here today have demonstrated time and time again
3:18 am
that self-regulation has not worked. they must be held accountable for allowing disinformation and misinformation to spread, and that is why i will be introducing the online consumer protection act, which i hope will earn bipartisan support. thank you. i yield back. >> the gentlelady yields back. and the chair now recognizes mr. bilirakis. my time has provided me enough
3:19 am
knowledge about the history of the committee to know what the telecommunications act was meant to do and what it wasn't. components of that law have been struck down by the courts while other provisions are interpreted and applied differently than first conceived. this is all a departure from congressional intent. regardless of what one thinks about whether the communications decency act was the right approach, the same members that voted for section 230 voted for the entire bill. the statute was meant to protect society, specifically our children. for our witnesses today, here's the problem before you: you don't want the government telling you what parts of your company you are allowed to operate, so imagine things from our perspective when you pick and choose what parts of the law you want to follow.
3:20 am
i really do admire your ingenuity. you have created something truly remarkable, in my opinion, but with that power you must also be good samaritans, and you have an obligation to be stewards of your platforms. if your legal department doesn't believe you are bound to the intent of the law, i would hope your moral compasses will. many of my colleagues will talk about concerns about the attack on the capitol in january and what happened in our cities last summer. these were all incidents where social media escalated tension and incited chaos through echo chambers and algorithms. as the new republican leader -- quite an honor -- of the subcommittee on consumer protection and commerce, i have been digging
3:21 am
into how your companies operate. that led me to run a survey of my district upon hearing the announcement of this hearing, and my conclusion is constituents simply don't trust you anymore. with thousands of responses, over 82% say they do not trust big tech to be good stewards of their platforms or to consistently enforce their policies. that includes a constituent who told me that a facebook live stream providing information to local families on teen suicide risks was blocked by facebook. another constituent said she has seen countless teens bullied online for not being able to deal with the comparison game, and others told me they
3:22 am
stopped using your services altogether out of fear and distrust. one even told me they quit social media due to treatment from your companies over their family's christian views. each one of these represents a story of how your companies have failed people, and you will be hearing from my colleagues more of these stories about how big tech has lost its way, highlighting a much larger problem. people want to use your services, but they suspect your coders are designing what they think we should see and hear, keeping us online longer than ever, all with the purpose of polarizing or monetizing us, disregarding any consequences of the assault on our inherent freedoms. so i don't want to hear about how changing your current law is
3:23 am
going to hurt startups, because i have heard directly from them, accusing you of anti-competitive tactics, and none of us wants to damage entrepreneurs. what i do want to hear is what you will do to bring our country back from the fringes, stop the poison, and cooperate with law enforcement to protect our citizens. our kids are being lost while you say you will try to do better, as we have heard countless times already. we need true transparency and real change. we need, again, not empty promises from you, and we have heard those over and over again. the fear you should have coming into this hearing today isn't that you are going to get
3:24 am
berated by a member of congress, it's that our committee knows how to get things done when we come together. we can do this with you or without you, and we will. thank you, mr. chairman, i yield back. >> the chair now recognizes mr. pallone, chairman of the full committee, for five minutes for his opening statement. >> thank you, chairman doyle, for this hearing. we are here because the spread of disinformation has been growing online, in particular on social media, with no guardrails to stop it. and it doesn't just stay online; it has real-world, often dangerous and even violent consequences, and the time has come to hold online platforms accountable for their part in the rise of disinformation and extremism. according to a survey conducted earlier this month, 30% of americans are hesitant or simply
3:25 am
do not want to take the covid-19 vaccine. this month the homeland security secretary identified domestic violent extremism as the greatest threat to the united states, and crimes against asian-americans have risen by nearly 150% since the beginning of the covid-19 pandemic. each of these controversies and crimes has been accelerated and amplified on social media platforms through misinformation campaigns, the spread of hate speech and the proliferation of conspiracy theories. facebook, google and twitter were warned about, but simply ignored, their platforms' role in fostering and spreading disinformation. since then the problem has only gotten worse. only after public outrage and pressure did these companies make inadequate attempts to appease lawmakers, but despite the public rebuke, wall street
3:26 am
continues to reward misinformation by driving their stock prices higher. despite repeated promises to tackle this crisis, you routinely make minor changes to your policies in response to the crisis of the day, or change an underlying internal policy that may or may not be related to the problem, but that's it; the underlying problem remains. mr. chairman, it's now painfully clear that neither the market nor public pressure will force these social media companies to take the aggressive action they need to take to eliminate disinformation and extremism from their platforms, and therefore it's time for congress and this committee to legislate and realign these companies' incentives. today they have a blank check to do nothing; rather than limit the spread of misinformation, they use divisive content to get americans hooked on their
3:27 am
platforms at the expense of the public interest. and it's not just that social media companies are allowing disinformation to spread; in many cases they are amplifying and spreading it themselves, and fines have become the cost of doing business. the dirty truth is that they are relying on algorithms to purposely promote divisive or extremist content so they can take in more money in ad dollars, because more extreme content means more views, and more views equal more money. mr. chairman, that's what it's all about: more money. it's crucial to understand these companies are not mere bystanders; they are playing an active role in the spread of disinformation and extremism because they make money on it. so when a company is promoting this harmful content, i question whether liability protection should apply. members of this committee have suggested legislative solutions,
3:28 am
and the committee will consider all of these options so we can finally align the interests of the companies with the interests of the public and hold the platforms and their ceos accountable when they stray. that's why you are here today. mr. zuckerberg, mr. pichai and mr. dorsey, your platforms have hindered efforts to stop the spread of the virus while trampling on civil liberties, and some bad actors will shout fire in a crowded theater, and your platforms are handing them a megaphone. the time for self-regulation is over. it's time we legislate to hold you accountable, and that's what we are going to do. i want to thank you all because i know you are serious about moving forward on legislation, which we will do, i promise, everybody.
3:29 am
>> the chair now recognizes mrs. rodgers for five minutes for her opening statement. >> thank you, mr. chairman. ten years ago, when i joined big tech platforms, i thought they would be a force for good. i thought that they would help us build relationships and promote transparency in congress. i can testify today i was wrong. that is not what has transpired. you have broken my trust. yes, because you failed to promote the battle of ideas and free speech. yes, because you censor political viewpoints you disagree with. those polarizing actions matter for democracy. but do you know what convinced me big tech is a destructive force? it's how you use your power to
3:30 am
manipulate and harm children. i am a mom of three school-aged kids, and my husband and i are fighting the big tech battles in our household every day. it's a battle for their development, a battle for their mental health and ultimately a battle for their safety. i have monitored where your algorithms lead them. it's frightening, and i know i am not alone. after multiple teenage suicides in my community, i reached out to our schools and we started asking questions: what is going on with our kids? what is making them feel so alone, empty and in despair? this is what i heard over and over: they are all raising the alarm about social media. a day doesn't go by that i don't talk to friends and other parents who tell me their 14-year-old is depressed, she used to love
3:31 am
soccer and now they can't get her to do anything, and she never gets off her device or leaves her room. i think about a mom who told me she can't ever leave her daughter alone because she harms herself, and the family who is recovering after almost losing their daughter to a predator she met online. these stories are not unique to me or eastern washington. i recently learned of a college student who lost nine friends to suicide. this is unimaginable. the science on social media is becoming clearer. between 2011 and 2018, rates of depression, self-harm, suicide and suicide attempts exploded among american teens. during that time, rates of teen depression increased more than 60%, with a larger increase among
3:32 am
young girls. between 2009 and 2015, emergency room admissions for self-harm among 14-year-olds tripled, and suicides substantially increased. one study found that during that time, teens who used their devices for five or more hours a day were 66% more likely to have at least one suicide-related outcome compared to those who used their devices for just one hour, and others report lower psychological well-being and feelings of loneliness. remember, our kids, the users, are the product. you, big tech, are not advocates for children. you exploit and profit off them. big tech needs to be exposed and completely transparent about what you are doing to our children so parents like me can make
3:33 am
informed decisions, and we expect big tech to do more to protect children, because you have not done enough. big tech has failed to be good stewards of your platforms. i have two daughters and a son with a disability. let me be clear: i do not want you defining what is true for them. i do not want their future manipulated by your algorithms. i do not want their self-worth defined by the engagement tools you built to attract their attention. i do not want them to be in danger from what you created. i do not want their emotions and vulnerabilities taken advantage of so you can make more money and have more power. i am sure most of my colleagues on this committee who are parents and grandparents feel the same way. over 20 years ago, before we knew what big tech would become,
3:34 am
congress gave you big liability protections. i want to know: why do you think you still deserve those protections today? what will it take for your business model to stop harming children? i know i speak for millions of moms when i say we need answers, and we will not rest until we get them. thank you. >> thank you. the gentlelady yields back. the chair would like to remind members that all written statements should be made part of the record. i want to introduce our witnesses for today's hearing and thank them all for appearing today. first, we have mark zuckerberg, chief executive officer of facebook, then sundar pichai of google, and jack dorsey of
3:35 am
twitter. mr. zuckerberg, we will start with you. you are recognized for five minutes. >> ranking members and members of the committee, i am glad this committee is looking at all the ways that misinformation and disinformation show up in our country's discourse. these are important challenges for society, and we need to decide how to handle speech that is legal but harmful and who should be responsible for what people say. misinformation is not a new problem. it was 200 years ago that a congressman said a lie would travel from maine to georgia while the truth was still getting its boots on. the internet gives everybody the power to communicate, and that certainly presents unique challenges. people often say things that aren't verifiably true but that speak to their lived experiences.
3:36 am
i think we have to be careful about restricting that. if somebody feels intimidated while voting, i believe they should be able to share their experience even if the election overall was fair. we don't want a world where every text message, e-mail and video has to be fact-checked before you hit send, but at the same time we also don't want misinformation to spread that stops people from voting or causes other harms. at facebook we do a lot to fight misinformation. we remove content that could lead to imminent real-world harm, we have built a third-party fact-checking program, and we invest a lot in directing billions of people to authoritative information.
3:37 am
it's not possible to catch every piece of harmful content without infringing on people's freedoms in a way i don't think we would be comfortable with as a society. our approach was tested in 2020, when we took extraordinary steps: we banned hundreds of militias and directed people to official results. we labeled over 180 million posts. we directed 140 million people to our official voting information center and helped 4.5 million people register to vote. we did our part to secure the integrity of the election. then, on january 6th, president trump gave a speech rejecting the results and calling on people to fight. the attack on the capitol was an outrage, and i want to express my sympathy to all the members, staff and capitol police
3:38 am
officers who had to face it. i believe the former president should be responsible for his words and the people who broke the law should be responsible for their actions. so that leaves the question of the broader information ecosystem. i can't speak for everybody else -- the tv channels, radio stations, news outlets, websites and other apps -- but i can tell you what we did. before january 6th, we worked with law enforcement to identify threats, and we didn't catch everything, but we made our services inhospitable to those who might do harm, and when we feared he might spur further violence, we suspended his account.
3:39 am
we need an accountable process, which is why we created an independent oversight board that can overrule our decisions, and we need democratically agreed rules for the internet. the reality is our country is deeply divided right now, and that isn't something that tech companies alone can fix. we all have a part to play in helping to turn things around, and i think that starts with taking a hard look at how we got here. some people say the problem is that social networks are polarizing us, but that's not clear from the evidence or research. polarization was rising in america before social networks were even invented. others claim that our algorithms feed us divisive content because it's good for business, and that's not accurate either. i know that technology can help bring people together. we see it every day on our
3:40 am
platforms. facebook is successful because people have a deep desire to connect and share, not to stand apart and fight. we believe that connectivity and togetherness are more powerful ideals than division and discord, and that technology can be part of the solution to the challenges our society is facing. we are ready to work with you to move beyond hearings and get started on real reform. thank you. >> thank you, mr. zuckerberg. now, mr. pichai, you are recognized for five minutes. mr. pichai, are you unmuted? >> chairman, ranking members and members of the committee, thank you for the
3:41 am
opportunity to appear before you today. to begin, i want to express my sympathies to those who have lost loved ones to covid or to the recent gun violence in boulder and atlanta. in difficult times we are reminded of what connects us as americans: the hope that we can make things better for our families and our communities. and we at google are committed to that work. i joined google because i believed the internet was the best way to bring the benefits of technology to more people, and for the past three decades we have seen how it has inspired the best in society by expanding knowledge and providing opportunities for discovery and connection. i am proud that anybody can turn to google for help, whether they are looking for vaccine information, learning new skills on youtube or using digital tools to grow their business. in 2020 our products helped 2 million u.s. businesses and
3:42 am
publishers generate $426 billion in economic activity. we are energized to help people at scale and humbled by the responsibility that comes with it. thousands of people at google are focused on everything from cyber attacks to privacy to today's topic, misinformation. our mission is to organize the world's information and make it universally accessible and useful. central to that is providing trustworthy content and opportunities for free expression while combating misinformation. it's a big challenge without easy answers. 500-plus hours of video are uploaded to youtube every minute, and approximately 15% of google searches each day are new to us. 18 months ago nobody had heard of covid-19, and sadly that was the top trending search last
3:43 am
year. staying ahead of new challenges to keep users safe is a top priority. we saw the importance of that on january 6th, when a mob stormed the u.s. capitol. google strongly condemns these violent attacks on our democracy and mourns the lives lost. in response, youtube removed videos that violated our incitement policies and began issuing strikes to channels in violation of the policy. we removed apps from the play store that incited violence around the capitol riots, in line with our policies, and we were able to act quickly because we were prepared ahead of the 2020 elections. our reminders of how to register to vote were viewed over 2 billion times. youtube's election results panels have been viewed over 8
3:44 am
billion times. after the december 8th safe harbor deadline, we removed content claiming the outcome of the election was changed by fraud. globally, we have committed over $550 million in ad grants for covid-related psas to governments, government health organizations and non-profits. we also removed 850,000 videos and blocked nearly 100 million covid-related ads throughout 2020. across all of this work, we strive to have transparent policies, and our ability to provide a range of information and viewpoints while also being
3:45 am
able to remove misinformation is possible only because of legal frameworks like section 230. its foundation has been a powerful force for good. i look forward to working with you to create a path forward for the next decade. >> thank you. the chair now recognizes mr. dorsey for five minutes. >> thank you, members of the energy and commerce committee and its subcommittees, for the opportunity to speak with the american people about how twitter may be used to spread disinformation and about our solutions. my remarks will be brief so we can move to your questions and discussion. in our discussion today, some of you might bring up specific tweets or examples, and i will likely answer that my team will follow up with you; i don't think that exchange is useful. i would rather we focus on the principles of how we approach these problems.
3:46 am
we believe in free expression. we believe in free debate and conversation to find the truth. at the same time, we must balance that with our desire for our service not to be used to sow confusion, disinformation or destruction. our process for moderating content is designed to constantly evolve. we observe what is happening on our service, we work to understand the ramifications, and we use that to strengthen our operations. we push ourselves to improve based on the best information we have. much of what we are likely to discuss today involves entirely new situations the world has never experienced before, and some unique cases involve elected officials. we believe the best way to face a big new challenge is by narrowing the problem to have the greatest impact. disinformation is a broad concept, and we need to focus our approach on where we see the greatest risks if we hope to have any impact at all, so we chose to focus on disinformation
3:47 am
that could lead to off-line harm, and three categories to start. many of you will have strong opinions on how effective we are in this work; some will say we are doing too much and removing free speech rights, and some will say we are not doing enough and end up causing more harm. both points of view are reasonable and worth exploring. if we woke up tomorrow and decided to stop moderating content, we would end up with a service very few people or advertisers would want to use. ultimately we are running a business, and a business wants to grow the number of customers it serves. enforcing policy is a business decision. different businesses and services will have different policies, some more liberal than others, and we believe it's critical this variety continues to exist; forcing every business to behave the same diminishes free-market ideals. if instead we woke up tomorrow
3:48 am
and decided to ask the government what content to take down or leave up, we would not be left with a service able to question the government, and it goes against the rights of individuals. this would impose enormous requirements on businesses that would further entrench only those that could afford to meet them. so how do we resolve these two viewpoints? one way is to create shared protocols. social media has proven itself important enough to be worthy of an internet protocol, one that a company like twitter can contribute to and build on, creating experiences people love to use. we started work on such a protocol, which we call blue sky. it is intended to act as a social media protocol not owned by any single company or organization.
3:49 am
any developer around the world can develop for it. greater transparency is the strongest benefit. anyone around the world can see everything that's happening in the network including exactly how it works. one doesn't have to trust a company. just look at the source code. second, since the base protocol is shared, it will increase innovation around business models, recommendation algorithms, and moderation controls which are in the hands of individuals rather than private companies. this will allow people to experiment in a market-based approach. finally, it will allow all of us to observe, acknowledge, and address any societal issues that arise much faster. having more eyes on the problems will lead to more impactful solutions that can be built directly into this protocol, making the network far more secure and resilient. a decentralized open source protocol for social media is our vision and work for the long
3:50 am
term. we continue the cycle mentioned earlier of constantly improving our approach to content moderation in the short term. i hope our discussion today will focus on more enduring solutions. one final note. we are a bunch of humans with a desire to make the world around us better for everyone living today and those that come after us. we make mistakes in prioritization and in execution. we commit to being open about these and doing our best to remedy what we control. we appreciate the enormous privilege we have in building technologies to host some of the world's most important conversations, and we share the desire for greater outcomes for everyone who interacts with them. thanks for your time and i look forward to the discussion. >> thank you, mr. dorsey. we've concluded witness opening statements. at this time we will move to member questions. i want to make sure that members are aware that our witnesses are being assisted by counsel, and during questions our witnesses may briefly mute themselves to seek advice of counsel, which is
3:51 am
permitted. each member will have five minutes to start asking questions of our witnesses. i ask everyone to please adhere to that five-minute rule as we have many people that want to ask questions. i will start by recognizing myself for five minutes. >> mr. chairman, point of order. >> the gentleman, who is speaking? >> jeff duncan, point of order. >> yes, sir. >> if the witnesses are advised by counsel and we're not swearing them in, why would they need counsel? >> in previous hearings, we've always permitted witnesses to have counsel. sometimes you'll see them at a hearing just leaning back and talking to their counsel before a question. but it's allowed under our rules. and i just want to make members aware they may mute themselves while that's going on. >> they should be sworn in. but i yield back.
3:52 am
>> thank you. gentlemen, my time is short and i ask that you make your responses as brief and to the point as possible. if i ask you a yes-or-no question, i'm just looking for a yes or no. so please respond appropriately. i want to start by asking all three of you if your platform bears some responsibility for disseminating disinformation related to the election and the stop the steal movement that led to the attack on the capitol, just a yes-or-no answer. mr. zuckerberg. >> chairman, i think our responsibility is to build systems that can -- >> mr. zuckerberg, i just want a yes-or-no answer, okay? yes or no, do you bear some responsibility for what happened? >> congressman, our responsibility is to make sure that we build effective systems. >> okay. the gentleman chooses not to answer the question. mr. pichai, yes or no. >> we always feel a deep sense of responsibility. but i think we worked hard this election. it was one of our
3:53 am
more substantive efforts. >> is that a yes or a no? >> congressman, it's a complex question. >> okay. we'll move on. mr. dorsey. >> yes, but you also have to take into consideration the broader ecosystem. it's not just the technology platforms that are used. >> thank you. thank you. i agree with that. mr. zuckerberg, independent analysis has shown that despite all the things that facebook did during the election, users still interacted with election misinformation roughly 1.1 billion times over the last year. the initial stop the steal group started on facebook and gained over 350,000 followers in less than a day, faster than almost any other group in your platform's history. and they were immediately calling for violence. in mid-december, you stopped promoting high quality news outlets for election content at a time when the disinformation was at its height. and finally, the fbi has released numerous documents showing that many of the insurrectionists
3:54 am
used facebook to coordinate and plan the attack on january 6. so my question is, how is it possible for you not to at least admit that facebook played a central role or a leading role in facilitating the recruitment, planning, and execution of the attack on the capitol? >> chairman, my -- my point is that i think that the responsibility here lies with the people who took the actions to break the law and do the insurrection. and secondarily, also the people who spread that content, including the president, but others as well, with repeated rhetoric over time, saying that the election was rigged and encouraging people to organize. i think that those people bear the primary responsibility as well. and that was the point that i was making. >> i understand that, but your platforms supercharged that. you took a thing and magnified it. in 12 hours you got 350,000
3:55 am
people on your site. you ginned this up. your algorithms make it possible to supercharge these kinds of opinions. i think we're here because of what these platforms enable, how your choices, you know, put our lives and our democracy at risk. and many of us find it just unacceptable. i want to ask each of you another question. do you think vaccines that have been approved for covid-19 work? just yes or no, do you think the vaccines that have been approved work? mr. zuckerberg. >> yes. >> mr. pichai. >> yes, absolutely. >> mr. dorsey. >> yes, but i don't -- i don't think we're here to discuss our own personal opinions. >> i just want to know if you think the vaccines work. yes? >> yes, however -- >> thank you. so if you think the vaccines work, why have your companies allowed accounts that repeatedly violate your vaccine disinformation policies to remain up? according to a report, just 12
3:56 am
accounts on facebook, twitter, and instagram, account for 65% of all the vaccine disinformation on your platforms. you're exposing tens of millions of users to this every day. i don't have the stats on youtube but my understanding is it's similar. so my question is, why in the midst of a global pandemic that has killed over half a million americans, that you haven't taken these accounts down that are responsible for the preponderance of vaccine disinformation? will you all commit to taking these platforms down today? mr. zuckerberg. >> congressman, yes, we do have a policy -- >> i know you have a policy, but will you take the sites down today? >> congressman, i will need to look at -- have our team look at -- >> have an answer for me tomorrow because they still exist. we found them as early as last night. mr. pichai, how about you?
3:57 am
>> we have removed over 850,000 videos -- >> can you remove them all? do you still have people spreading disinformation on your platforms? there are about 12 super spreaders. >> we have policies and we take down content. some of the content is allowed if it's people's personal experiences but, you know, we definitely -- >> okay, thank you. i see my time is getting expired. mr. dorsey, will you take these sites down? you've got about 12 super spreaders. will you take them down? >> yes, we remove everything against our policy. >> thank you. i see my time has expired. i will now yield to the ranking member for his five minutes. >> i thank my friend for yielding. amanda met a man online who took inappropriate screenshots of amanda and proceeded to follow her around the internet and harass her for years. he found her classmates on
3:58 am
facebook. he would send the pictures he took. to cope with the anxiety, amanda turned to drugs and alcohol, but it became too much for her. mr. zuckerberg, clearly ms. todd was underage, so the photo the harasser shared was illegal. do you believe facebook bears any responsibility for the role it played in her death, yes or no? >> sorry, i was muted. congressman, that is -- it's an incredibly sad story. and i -- i -- i think that we certainly have a responsibility to make sure that we're building systems that can fight and remove this kind of, umm, harmful content. in the case of, umm, child exploitation content, we've been building systems for a long time that use ai. we have thousands of people
3:59 am
working on being able to identify this content and remove it. i think our systems are generally pretty effective at this. and i think it's our responsibility to make sure -- >> my time is pretty short, but would you say yes or no, then? >> sorry. can you repeat that? >> well, in the question, yes or no, then, any responsibility? >> congressman, i believe that -- >> let me move on, because i'm very short on time. do you believe facebook should be held accountable for any role in her death, yes or no? >> congressman, the responsibility that i think platforms should have -- >> okay. i'm going to have to take it that you're not responding to the question. unfortunately stories like amanda todd's are only becoming more common. we often talk about how your platforms can be used for good or evil. the evil seems to persevere. mr. zuckerberg, you've stated that you support thoughtful change to section 230 to ensure that tech companies are held
4:00 am
accountable for certain actions that happen on your platforms such as child exploitation. what specific changes do you support in section 230? >> thanks, congressman. i would support two specific changes, especially for large platforms, although i want to call out that i think for smaller platforms, we need to be careful about any changes that we make that remove their immunity because that could hurt competition. let me call out these for larger platforms. first, platforms should have to issue transparency reports that state the prevalence of content across all different categories of harmful content, everything from child exploitation to terrorism to incitement of violence to intellectual property violations. >> let me ask you, where would
4:01 am
those transparency reports be reported to and how often do you think those should be going out? >> congressman, as a model, facebook has been doing something to this effect every quarter, right, where we report on the prevalence of each category of harmful content and how effective our systems are at identifying that content and removing it in advance. and i think the companies should be held accountable for having effective systems to do that broadly. the second change that i would propose is creating accountability for the large platforms to have effective systems in place to moderate and remove clearly illegal content. so things like sex trafficking or child exploitation or terrorist content. and i think it would be reasonable to condition immunity for the larger platforms on having a generally effective system in place to moderate clearly illegal types of content.
4:02 am
>> let me interrupt real quickly there, i'm running really short on time. i know in your testimony you're talking about that you say platforms should not be held liable if a particular piece of content evades this detection. so again, that's one of the areas, when you talk about the transparency and also the accountability, that i would like to follow up on. i have to go on real quick. mr. pichai, yes or no, do you agree with mr. zuckerberg's changes to section 230? >> they are definitely good proposals around transparency and accountability which i've seen in various legislative proposals as well, which i think are important principles and we would certainly welcome legislative approaches in that area. >> okay. mr. dorsey, do you agree with mr. zuckerberg, yes or no? the changes on 230. >> i think the ideas around transparency are good. i think it's going to be very hard to determine what's a large
4:03 am
platform and what's a small platform. that may incentivize things. >> thank you very much, my time is expired, i yield back. >> the chair recognizes chair schakowsky for five minutes. >> thank you so much. mr. zuckerberg, immediately after the capitol insurrection, sheryl sandberg did an interview in which she insisted the siege was largely planned on smaller platforms. but court filings actually show quite the opposite, that the proud boys and oath keepers used facebook to coordinate in real time during the siege. and so my question for you is, will you admit today that
4:04 am
facebook groups in particular played a role in fomenting the extremism that we saw and that led to the capitol siege? >> congresswoman, thanks for the question on this. in the comment that sheryl made, i believe that what we were trying to say was, umm, and what i stand behind, umm, is what was widely reported at the time, that, uh, after -- >> i'm sorry to interrupt, as many of my colleagues have had to do, because we only have five minutes. but would you say that -- and would you admit that facebook played a role? >> congresswoman, i think certainly there was content on our services. and, umm, from that perspective, i think that there's further work we need to do to make our services and moderation more effective.
4:05 am
one of the things -- >> okay. i'm going to ask mr. pichai a question. many companies have used section 230 as a shield to escape consumer protection laws. and i have a bill that would actually not protect companies that do that. and so mr. pichai, would you agree that that would be a proper use, to not allow liability protection for those who violate consumer protection laws? >> congresswoman, consumer protection laws are very important in many areas like we comply with hipaa. i think the right approaches to have legislation in applicable areas and have us -- >> i'm going to have to interrupt again. is that a yes?
4:06 am
that if a law has been broken, a consumer protection law, that it would not -- there would not be liability protection under section 230 for you? >> the liability protections allow us to actually take strong action on particularly new types of content. when the christchurch shooting happened, within a few minutes our teams had to make decisions about the content to take down; that certainly is what we rely on. i agree with you that we should have strong consumer protection laws and be subject to them, and have agencies like the ftc have oversight over those laws and how we comply with them. >> let me just ask -- thank you -- a real yes or no, quickly. do you think that when you take money to run advertisements that promote disinformation, that you are exempt from liability, yes
4:07 am
or no? yes or no? mr. zuckerberg. yes or no? >> congresswoman, i don't know the legal answer to that. but we don't allow misinformation in our ads. any ad that's been fact checked as false, we don't allow to run as an ad. >> mr. dorsey? >> again, i also would need to review the legal precedent. but we would not allow that. >> okay. and mr. pichai? >> we are subject to the ftc's policies. >> let me ask one more question. do you think that section 230 should be expanded to trade agreements that are being made as happened in the u.s. trade
4:08 am
agreement with mexico and canada? yes or no? mr. zuckerberg. >> congresswoman, my primary goal would be to update section 230 to reflect the modern reality we've learned over the last few years. i do think -- >> i hear you, but i'm talking now about trade agreements. mr. pichai. >> congresswoman, i think there is value in it. if there is evolution of section 230, that should apply, so being able to do that in a flexible way would be good, i think. >> mr. dorsey. >> i don't fully understand the ramifications of what you're suggesting. so -- >> to have a liability shield that would be international and ratified in trade agreements.
4:09 am
i think it's a bad idea. >> the gentlelady's time has expired. >> thank you, i yield back. >> the chair recognizes mr. bilirakis, ranking member of the subcommittee on consumer protection and commerce, for four minutes. >> thank you, i appreciate it. dr. -- mr. dorsey, you heard briefly about what i'm hearing again in my district in my opening remarks, you've heard them. the other key part with these stories that we're hearing when we conduct these surveys is how we empower law enforcement. in a hearing last year we received testimony that since 2016, twitter has intentionally curtailed sharing threat data with law enforcement fusion centers. here is the question. you're well aware that on twitter and periscope, traffic has increased from bad
4:10 am
actors seeking to groom children for molestation, lure females into sex trafficking, sell illegal drugs, incite violence, even threaten to murder police officers. are you willing to reinstate this cooperation, retain evidence, and provide law enforcement the tools to protect our most vulnerable, yes or no? >> first, child sexual exploitation has no place on our platform, and i don't believe that's true. we work with local law enforcement regularly. >> so you're saying that this is not true? what i'm telling you. are you willing to reinstate, reinstate, in other words it's not going on now, reinstate this cooperation with law enforcement, to retain evidence and provide law enforcement the tools to protect our most vulnerable? >> we would love to work with you in more detail on what you're seeing. but we work with law enforcement
4:11 am
regularly. we have a strong partnership. >> so you're saying that this is not true, what i'm telling you? >> i don't believe so. but i would love to understand the specifics. >> will you commit to doing what i'm telling you you're not doing in the future and work with me on this? >> we'll commit to continue doing what we are doing. >> and what is that? you're saying that -- >> working with local law enforcement. >> okay. well, let me go on to the next question, but i'm going to follow up with this to make sure you're doing this. our children's lives are in jeopardy here. mr. zuckerberg, we have heard you acknowledge mistakes about your products before. there are now media reports of an instagram for under 13 being launched. my goodness. between this and youtube kids, you and mr. pichai have
4:12 am
obviously identified a business case for targeting this age bracket with content. and i find that very concerning, targeting this particular age bracket, 13 and under. giving them free services. how exactly will you be making money? or are you trying to monetize our children too and get them addicted early? and will you be allowing your own children to use this site with the default settings? we're talking about, again, the site that apparently is being launched for children under 13 -- 13 and under, or under 13, actually. can you please answer that question for me? >> congressman, we're early in thinking through how this service would work. there is clearly a large number of people under the age of 13 who would want to use a service like instagram.
4:13 am
we currently do not allow them to do that. >> why would it be beneficial to our children to launch this kind of service? >> well, congressman, i think helping people stay connected with friends and learn about different content online is broadly positive. there are clearly issues that need to be thought through and worked out, including how parents can control the experience of kids, especially kids under the age of 13. we haven't worked through all that yet. so we haven't kind of formally announced the plans. but i think that something like this could be quite helpful for a lot of people. >> excuse me. okay. i'll reclaim my time. mr. pichai, your company has had failures curating content for kids. what advice would you offer your colleague here? >> congressman, we have invested a lot in a one-of-a-kind product, youtube kids. the content there is -- you know, we work with trusted
4:14 am
content partners. think sesame street as an example of the type of channel you would find there. science videos and cartoons. we take great effort to make sure -- >> i'll reclaim my time. i have one last question for mr. zuckerberg. do you have concerns with what has appeared on your platform posted by youtube? and with regard to your children, but in general, do you have concerns, yes or no? >> congressman, are you asking me about youtube? >> yes, i'm asking you about youtube. >> congressman, i use youtube to watch educational videos with my children. >> do you have concerns? personally, for your children and your family, personally do you have concerns. >> congressman, my children are,
4:15 am
uh, 5 and 3 years old. so when i watch content on youtube with them, i'm doing it and supervising them. so in that context, no, i haven't particularly had concerns. but i think it's important that if anyone is building a service for kids under the age of 13 to use by themselves, that there are appropriate parental controls. >> the gentleman's time has expired. >> thank you. >> i ask all members to try to stick to our five-minute rule so we can get out of here before midnight. the chair will now recognize the full committee chair for five minutes. >> thank you, chairman doyle. my questions are for mr. zuckerberg and mr. pichai. i just want to say, after listening to your testimony, you give the impression that you're not actively in any way promoting this misinformation and extremism. i totally disagree with that. you're not nonprofits or
4:16 am
religious organizations that are trying to do a good job for humanity. you're making money. the point we're trying to make today, or at least i am, is that when you spread this misinformation and extremism, actively promote it and amplify it, you do it because you make more money. i kind of deny the basic premise of what you said. but let me get to the questions. let me ask mr. zuckerberg. according to a may 2020 "wall street journal" report on one of facebook's own reports. i trust you will provide all documents and information. but my question is, and please answer yes or no: were you aware of this research, showing that 64% of the members in the extremist facebook groups studied joined because of facebook's own recommendation tools, joined these extremist
4:17 am
groups in germany, were you aware of that, yes or no? >> congressman, this is something we've studied -- >> i'm asking if you were aware of it, it's a simple question, yes or no. were you aware of it, that's all i'm asking. >> aware at what time? >> i just asked if you were aware of it, mr. zuckerberg, yes or no. if not i'm going to assume the answer is yes, okay? >> i've seen the study. it was about a -- >> all right. i appreciate that. let me go to the second question which relates to that. you said yes. okay. the troubling research i mentioned demonstrates that facebook was not simply allowing disinformation and extremism to spread, it actively amplified it and spread it. this is my point. nonetheless, facebook didn't permanently stop recommending political and civil groups to the united states until after the january 6 insurrection, years after it was made aware of this research. the fact that facebook's own recommendation system helped populate extremist groups compels us to reevaluate
4:18 am
platforms' liabilities. now, back to that "wall street journal" article. facebook's chief product officer chris cox championed an internal effort to address division on facebook and proposed a plan that would have reduced the spread of content by hyperactive users on the far left and far right. the article alleges, mr. zuckerberg, you personally reviewed this proposal and approved it, but only after its effectiveness was decreased 80%. is that true? yes or no, please. >> congressman, we've made a lot of -- of measures that -- >> did you approve it after its effectiveness was decreased 80%, yes or no? >> congressman, i can't speak to that specific example. but we've put in place a lot of different measures, and i think they're effective, including -- >> did you review the proposal and approve it? >> congressman, we do a lot of work in this area and i review a lot of proposals and we move -- >> it's not a difficult question.
4:19 am
i'm just asking if you reviewed this internal proposal and you approved it. and you won't even answer that. i don't -- it's so easy to answer that question. it's very specific. all right. you won't answer, right, yes or no? >> congressman, that's not what i said. i said i did review that, in addition to many other proposals and things that we've taken action on, including shutting off recommendations for -- >> did you approve it with the 80% decrease in effectiveness? >> congressman, i don't remember that specifically. we've taken a number of different steps on this. >> fine. let me ask mr. pichai. mr. pichai, according to "the new york times," youtube's recommendation algorithm is responsible for more than 70% of the time users spend on youtube. in fact a former design ethicist at google is reported to have said, if i'm on youtube, i'm always going to steer you towards crazy town.
4:20 am
is youtube's algorithm designed to encourage users to stay on the site, yes or no? >> content responsibility is our number one goal. >> i'm only asking very simply whether youtube's recommendation algorithm is designed to encourage users to stay on the site. simple question, yes or no. >> and that's not the sole goal, congressman. >> but it is one of the goals, so the answer is yes, okay. so the bottom line is, simply put, your companies' bottom line is to amplify dangerous content, you're not innocent bystanders. that's why congress has to act, because you're not bystanders. you're encouraging this stuff. thank you, mr. chairman. >> the gentleman's time is expired. the chair now recognizes ms. rodgers, full committee ranking member, for five minutes. >> we tragically lost a number of young people to suicide in my community. in a three-year period, from
4:21 am
2013 to 2016, the suicide rate more than doubled in spokane county. in the last six months, one high school lost three teens. right now suicide is the second leading cause of death in the entire state of washington for teens 15 to 19 years old. as i mentioned, it's led to many painful conversations, trying to find some healing for broken families and communities. and together, we've been asking, what's left our kids with a deep sense of brokenness? why do children, including kids we've lost in middle school, feel so empty at such a young, vulnerable age? some studies are confirming what parents in my community already know. too much time on screens and social media is leading to loneliness and despair. and it seems to be an accepted truth in the tech industry, because from what we're hearing today, making money is more important.
4:22 am
bill gates put a cap on screen time for his daughter. steve jobs once said, quote, we limit how much technology our kids use at home. mr. zuckerberg, you've also said you don't want your kids sitting in front of screens passively consuming content. so mr. zuckerberg, yes or no, do you agree too much time in front of screens passively consuming content is harmful to children's mental health? >> congresswoman, the research that i've seen on this suggests that if people are using computers and -- >> i'm sorry, can you answer yes or no? >> i don't think the research is conclusive on that. i can summarize what i've learned, if that's helpful. >> i'll follow up at a later time. because i do know that facebook
4:23 am
has acknowledged that passive consumption on your platform is leading to people feeling worse. and you've said that going from video to video is not positive. yet facebook is designed to keep people scrolling. instagram is designed to get users to go from video to video. so i would like to ask you, if you said earlier you don't want kids sitting in front of the screens passively consuming content and your products are designed to increase screen time, do you currently have any limitations on your own kids' use of your products? how do you think that will change as they get older? >> sure, congresswoman, my daughters are 5 and 3 and they don't use our products. actually that's not exactly true, my eldest daughter max, i let her use messenger kids sometimes to message her cousins. overall, the research we've seen is that using social apps to connect with other people can have positive mental health benefits and wellbeing benefits
4:24 am
like helping people feel more connected and less lonely. passively consuming content doesn't have those positive benefits to wellbeing but isn't necessarily negative, it just doesn't promote wellbeing. it's a common misconception that our teams even have goals of trying to increase the amount of time that people spend. the news feed team at facebook -- >> thank you. mr. zuckerberg, i do have a couple of more questions. so do you agree that your business model and the design of your products is to get as many people on the platform as possible and to keep them there for as long as possible? if you could answer yes or no, that would be great. >> congresswoman, from a mission perspective, we want to serve everyone. but our goal is not -- we don't -- i don't give our news feed team, our instagram team, goals around increasing the amount of time that people spend. i believe if we develop a useful product -- >> thank you. we all have limited time.
4:25 am
i think the business model suggests that it is true. it was mentioned earlier that you're studying extremism. i would like to ask yes or no of all of you, beginning with mr. zuckerberg: has facebook conducted any internal research as to the effect your products are having on the mental health of our children? >> congresswoman, i know that this is something that we -- that we -- >> could you say yes or no? i'm sorry. >> i believe the answer is yes. >> okay. mr. dorsey, has twitter? >> i don't believe so. but we'll follow up with you. >> okay. mr. pichai, has google conducted any research on the effect your products are having on the mental health of children? >> we consult widely with expert third parties in this area, including samhsa and other mental health organizations. >> okay. i would like to see that. it sounds like you've studied extremism. let's get focused on our children. >> the gentlelady's time has
4:26 am
expired. the chair now recognizes mr. rush for five minutes. bobby, you need to unmute. there you go. you're still muted. >> good morning and thank you, mr. chairman. we all agree that social media sites are tools for stoking racial division or exacerbating racial injustice. however, there is a growing volume of research that demonstrates the disproportionate effects of disinformation and white supremacist extremism on women and people of color, especially black people. we have seen and continue to see too often social media
4:27 am
sites put their earnings before anything else. simply stated, your corporations carelessly prioritize profits over people. misinformation, conspiracy theories, and incendiary content targeting minorities remain prevalent. and social media companies, your companies, are profiting from hate on these platforms by harnessing data and generating advertising revenue from such content. there is only one comparison that remotely approaches the moral actions of your companies
4:28 am
and that is our nation's shameful and inhumane past. this is the very reason why i asked mr. dorsey, i remember you at our 2018 hearing, to commit to commissioning an independent third-party civil rights audit of twitter. this request at the hearing was followed up with a joint letter from chairman pallone and myself confirming that commitment. it is three years later and i am still waiting, mr. dorsey, for the results of that audit.
4:29 am
where is that audit, mr. dorsey? >> thank you. we, umm, we've taken another approach, which is to work with civil rights orgs on a regular basis. we have regular conversations with civil rights orgs multiple times a year. >> mr. dorsey, where is the audit that members of congress, including the chairman of the committee, where is the audit that we asked you and you agreed to forward? >> we don't have it. we sought a different approach. >> you remain very, very disingenuous. you intentionally lied to the committee. and you should be condemned for that. and i can't wait until we come out with legislation that will deal with you and your
4:30 am
colleagues. you haven't taken this seriously. and mr. dorsey, i am a black man in america. my experiences are different from your experiences. this audit is very, very important to me and to those who are similarly situated as i am. facebook, to their credit, has completed an audit. and there is no reason, simply no reason for you to not have completed that audit. mr. dorsey, has twitter examined the impact of
4:31 am
covid-19 misinformation, including misinformation about combating covid-19, targeted at african-americans? >> yes on both, and we review with civil rights orgs directly on a regular basis. that is the solution we chose. >> the gentleman's time has expired. the chair recognizes mr. upton for five minutes. >> thank you, mr. chairman. as i listen to this hearing, like it or not, it sounds like everybody on both sides of the aisle is not very happy. i think we all believe there is
4:32 am
responsibility to be shared by the three of you. i would offer or speculate, i guess you could say, that we're going to see some changes in section 230. you know, the president, former president trump, vetoed a pretty big bill, the defense bill, earlier last year, over this very issue, because he wanted a total repeal. he didn't get it. but i know that the senate now has got some legislation that's pending that's looking at a couple of reforms. my sense is that we may see something here in the near future as well. i serve on -- as one of only two house members on the commission on combatting synthetic opioid trafficking.
4:33 am
it's a multiagency federal commission. it's co-chaired by david trone in the house and tom cotton in the senate. there is concern we all have, not only as parents but as community leaders across the country, on opioids and the inability to remove illegal offers of opioids, steroids, even fake covid-19 vaccines, very troubling, as we see some of these platforms push such content to a user in search of it. so i guess my first question is to you, mr. zuckerberg. the sale of illegal drugs on your platform does violate your policy, yet it does remain a problem on your platforms. can you explain the resources that you currently have devoted to addressing the issue and whether or not you plan to develop more? this is an issue that i intend to raise with the commission as we look forward to this in the
4:34 am
next number of months. >> thanks, congressman. i think this is an important area and a good question. we have more than a thousand engineers who work on what we call our integrity systems that are basically ai systems that try to help find content that violates our policies. you're right that that content does violate our policies. we also have more than 35,000 people who work in content review, umm, who basically are either responding to flags that they get from the community or checking things that our ai systems flag for them but aren't sure about. and this is an area, when we're talking about reforming section 230, where i think it would be reasonable to expect that large platforms especially build effective systems to be able to combat and fight this kind of clearly illegal content. i think there will be a lot of ongoing debate about how to handle content which people find
4:35 am
distasteful or may be harmful but is legal. but in this case, when the content is illegal, i think it is pretty reasonable to expect that large platforms build effective systems for moderating this. >> so we saw earlier this week, of course we don't know all the facts of this terrible shooting in boulder, colorado, it appears at least from some of the initial reports that the alleged shooter was in fact bullied. and i think i saw some press reports that some of that had happened online as well. what process do you have that would allow parents or families to be able to pursue antibullying efforts that might be on your platforms? >> thanks, congressman. i think bullying is a really important case to consider for section 230, because first of all, it's horrible, and we need to fight it. and we have policies that are against it. but also it is often the case that bullying content is not
4:36 am
clearly illegal. so when we talk about needing the ability under something like section 230 to be able to moderate content which is not only clearly illegal content but broader, one of the primary examples that we have in mind is making sure that we can stop people from bullying children. here we work with a number of advocacy groups, we work with law enforcement to help fight this. this is a huge effort and part of what we do. i think it's extremely important. >> other than taking the approach that you don't want to see any changes to 230, what suggestions might you have for us as we examine this issue? >> sorry, congressman, i'm not saying that i don't think there should be changes. i think 230 broadly is important, i wouldn't repeal the whole thing. but the three changes i've basically suggested, one is around transparency, that large platforms should have to report
4:37 am
on a regular cadence for each category of harmful content, how much of that harmful content they're finding and how effective their systems are at dealing with it. the second thing i think that we should do is hold large platforms to a standard where they should have effective systems for handling clearly illegal content like opioids, or child exploitation and things like that. and the third thing that i think is an important principle is that these policies really do need to apply more to large platforms. and i think we need to find a way to exempt small platforms so that way, you know, when i was getting started with facebook, if we had gotten hit with a lot of lawsuits around content, it might have been prohibitive to get started. i don't want to see the next platforms being stopped from getting started. >> the gentleman's time has expired. the chair recognizes ms. eshoo. >> thank you, mr. chairman.
4:38 am
we're in california, it's good morning for us. i want to start by saying that content moderation like removing posts or banning accounts is about treating symptoms. and i think that we need to treat symptoms, but i also think that we need to address two underlying diseases. the first is that your products amplify extremism. the second is that your business models of targeted ads enable misinformation to thrive because you chase user engagement at great cost to our society. so to mr. pichai, last month the antidefamation league found that youtube amplifies extremism. scores of journalists and researchers agree. and here's what they say happens. a user watching an extremist video is often recommended more such videos, slowly radicalizing
4:39 am
the user. youtube is not doing enough to address recommendations. and it's why representative malinowski and myself introduced the protecting americans from dangerous algorithms act to amend section 230 so folks can examine the role of algorithmic amplification that leads to violence. it's also why i along with two of my house colleagues wrote to each of you about this issue. and mr. chairman, i ask that those letters be placed into the record. so my question to you, mr. pichai, is, are you willing to overhaul youtube's core recommendation engine to correct this issue? yes or no? >> congresswoman, we have overhauled the recommendation systems. i know you've engaged on these issues before. pretty substantially, in pretty much any area --
4:40 am
>> mr. pichai, yes or no? because we still have a huge problem. i outlined -- are you saying that the antidefamation league doesn't know what they're talking about? you know, all these journalists and researchers, there is a lot more to address. and that's why i'm asking you if you're willing to overhaul youtube's core recommendation engine to correct this. it's serious. it's dangerous. what more can i say about it? yes or no? >> congresswoman, if i may explain. >> no, i don't have time to explain. you know, let me just say this to the witnesses. we don't do filibuster in the house. that's something that's done in the senate. so a filibuster doesn't work with us. to mr. zuckerberg, your algorithms use unseemly amounts
4:41 am
of data to keep users on your platform because that leads to more ad revenue. now, businesses are in business to make money, we all understand that. but your model has a cost to society. the most engaging posts are often those that induce fear, anxiety, anger. and that includes deadly, deadly misinformation. the center for countering digital hate found that the explore and suggested posts parts of instagram are littered with covid misinformation, election disinformation, and qanon posts. so this is dangerous. and it's why representative schakowsky and i are doing a bill that is going to ban this business model of surveillance advertising. so are you willing to redesign your products to eliminate your focus on addicting users to your platforms at all costs?
4:42 am
yes or no. >> congresswoman, as i said before, the teams that design our -- >> you know what, i think -- let me just say this, and i think it's irritating all of us, and that is that no one seems to know the word "yes" or the word "no." which one is it? if you don't want to answer, just say i don't want to answer. so yes or no. >> congresswoman, these are nuanced issues. >> okay. so that's a no. to mr. dorsey, as chairwoman of the health subcommittee, i think that you need to eliminate all covid misinformation and not label or reduce its spread but remove it. i looked at a tweet this morning, robert kennedy jr. links the death of baseball legend hank aaron to the covid vaccine even though fact checkers debunked the story. the tweet has 9,000 retweets.
4:43 am
will you take this down? why haven't you? and also why haven't you banned the 12 accounts that are spewing deadly covid misinformation? this could cost lives. >> no, no, we won't take it down, because it doesn't violate our policy, we have a clear policy in place. >> what kind of policy is that? is it a policy for misinformation? >> no. >> the gentlelady's time has expired. the chair recognizes mr. scalise. is mr. scalise here? >> thank you. >> there we go. >> thank you, mr. chairman. i want to thank you for having this hearing. i want to thank our three witnesses for coming as well. clearly you're seeing a lot of concern being expressed by members on both sides, both republican and democrat, about
4:44 am
the way that your social media platforms are run, and especially as it relates to the fairness and equal treatment of people. i know i've had a lot of concerns, i've shared them with some of you individually over the last few years, about whether it's algorithms that seem to be designed to have a bias against conservatives. we all agree, whether it's illegal activity, bullying, those things ought not be permitted on social media. but there's a big difference between stopping bullying and violent type of social media posts versus actual censorship of political views that you disagree with. and i think i want to ask my first question to mr. dorsey, because there have been a lot of concerns expressed recently about that unequal treatment. i'll start with "the new york post" article, a lot of people have seen this, this article was censored by twitter when it was originally sent out.
4:45 am
this is "the new york post" which is a newspaper that goes back to 1801, founded by alexander hamilton. and for weeks, this very credibly sourced article right before an election about hunter biden was banned by twitter. and then when you contrast that, you have this "washington post" article that was designed to misportray a conversation between president trump and the georgia secretary of state, since been -- parts of this have been debunked, yet this article can still be tweeted out. i want to ask mr. dorsey, first of all, do you recognize that there is this real concern that there is an anticonservative bias on twitter's behalf and will you recognize that this has to stop if this is going to be -- twitter is going to be viewed by both sides as a place where everybody's going to get fair treatment? >> we made a total mistake with "the new york post." we corrected that within 24
4:46 am
hours. it was not to do with the content. it was to do with the hacked materials policy. we had an incorrect interpretation. we don't write policy according to any particular political leaning. if we find any of it, we root it out. we make mistakes, we will make mistakes. our goal is to correct them as quickly as possible, in that case we did. >> i appreciate you recognizing that was a mistake. "the new york post"'s entire twitter account was blocked for two weeks. to censor, we've got a first amendment too, to censor a newspaper that's highly respected, again, 1801, founded by alexander hamilton, for their entire account to be blocked for two weeks by a mistake, it seems like a really big mistake. was anyone held accountable in your censoring department for that mistake? >> we don't have a censoring department. i agree -- >> who made the decision then to
4:47 am
block their account for two weeks? >> we didn't block their accounts for two weeks, we required them to delete the tweet, and they could tweet it again. >> even though the tweet was accurate? you've seen the conversations on both sides about section 230, and there's going to be more discussion about it. but you're acting as a publisher. if you're telling a newspaper that they've got to delete something in order for them to be able to participate in your account, you're no longer hosting a town square, you're acting as a publisher when you do that. >> it was literally just a process error. this was not against them in any particular way. we require, if we remove a violation, we require people to correct it. we changed that based on not wanting to delete that tweet, i agree with that, i see it, it is something we learned.
4:48 am
>> now let me go to the "washington post" article. there are tweets today on your service that mischaracterize it in a way that even "the washington post" admitted is wrong, yet those mischaracterizations can still be tweeted. will you address that and take those down to reflect what even "the washington post" themselves have admitted is false information? >> our misleading information policies address public health and civic integrity. that's it. >> i would hope you would go and take that down. i know you said in your opening statement, mr. dorsey, that twitter is running a business, and you said, quote, a business wants to grow the customers it serves. just recognize, if you become viewed and continue to become viewed as an anticonservatively biased platform, there will be others who rise up to compete and ultimately take
4:49 am
millions of people off twitter. i would hope you recognize that. and i would yield back the balance of my time. >> gentleman's time is expired. the chair now recognizes mr. butterfield for five minutes. >> thank you, mr. chairman. mr. zuckerberg, last year in response to the police killing of george floyd, you wrote a post on your facebook page that denounced racial bias. it proclaimed black lives matter. you also announced that the company would donate $10 million to racial justice organizations. and, mr. dorsey, twitter changed its official bio to a black lives matter tribute. you pledged $3 million to an anti-racism organization started by colin kaepernick. and mr. pichai, your company held a companywide moment of silence to honor george floyd. you announced $12 million in grants to racial justice organizations. the ceo of google subsidiary youtube wrote, we believe black lives matter, and we all need
4:50 am
to do more to dismantle systemic racism, end of quote. youtube also announced it would start a $100 million fund for black creators. now, all of this sounds nice, but these pronouncements, gentlemen, these pronouncements and money donations do not address the way your companies' own products, facebook, twitter, and youtube, have been successfully weaponized by racists and are being used to undermine social justice movements, to suppress voting in communities of color, and spread racist content and lies. and so, gentlemen, in my view, your companies have contributed to the spread of race-based extremism and voter suppression. as the "new york times" noted last year, it's as if the heads of mcdonalds, burger king, and taco bell all got together to fight obesity by donating to a vegan food co-op rather than lowering their calories, end of quote. gentlemen, you could have made
4:51 am
meaningful changes within your organizations to address the racial biases built into your products and donated to these organizations. but instead we are left with another round of passing the buck. america is watching you today. this is a moment that begins a transformation of the way you do business and you must understand that. perhaps a lack of diversity within your organizations has contributed to this. the initiative has been working for years to increase diversity and equity in tech companies at all levels, and you know that because we have visited with you in california. we founded this initiative in 2015 with the hope that by now the tech workforce would reflect the diversity of our country. here we are in 2021. i acknowledge that you have made some modest advancements but not enough. there must be meaningful representation in your companies
4:52 am
to design your products and services in ways that work for all americans. and that requires public accountability. history has shown that you have talked the talk but have failed to walk the walk. it appears now that congress will have to compel you, perhaps with penalties, to make meaningful changes. and i am going to try the yes or no answer, and hopefully i will have better results than my colleagues. mr. zuckerberg, i'll start with you. and please be brief, yes or no. would you oppose legislation that would require technology companies to publicly report on workforce diversity at all levels? >> congressman, i don't think so, but i need to understand it in more detail. >> we have talked about that, and i hope that if we introduce this legislation, you will not oppose it. what about you, mr. dorsey? would you oppose a law that made workforce diversity reporting a requirement? >> no, i wouldn't oppose it.
4:53 am
it does come with some complications in that we don't always have all the demographic data for our employees. >> well, thank you for that. and we talked with you and your office some years ago and you made a commitment to work with us, but we need more. what about you, mr. pichai? are you willing to support -- well, would you be willing to commit to -- would you oppose a law that made workforce diversity reporting a requirement? would you oppose it? >> congressman, we were the first company to publish transparency reports. we publish it annually. so happy to share that with you. we provide in the u.s. detailed demographic information on our workforce. >> the congressional black caucus has said we need greater diversity among your workforce from the top to the bottom. and we need for you to publish the data so the world can see it. that's the only way we're going to deal with diversity and
4:54 am
equity. thank you so much, mr. chairman. i heard your gavel at the beginning. and i yield back the ten seconds i have. chair now recognizes mr. guthrie. >> thank you, mr. chair, and thanks to the witnesses for being here. big tech decisions have real impact on people. that's why i asked my constituents using those platforms to share their experiences with me. and i'm here to advocate on their behalf. i received 450 responses. and one main thing i heard was the experience of having religious content taken down, which is important because there's a lot of religious organizations now streaming their services due to covid. i did have one instance where a constituent wrote to me, quote, and this is what she posted. i am thankful god's grace is new every morning. and then facebook took it down, and my constituent said she got a notice from
4:55 am
facebook that it violated their policies around hate. and so i just want to discuss this. i could ask you yes or no questions, mr. zuckerberg, on that. but i just want to talk about it a little bit. i know that we don't want extreme language on the internet. i'm with you on that. and you can't watch everything. so you use algorithms to find that. so algorithms will flag things, some that are clearly obvious and some that probably shouldn't be flagged. but it seems to me that there's a bias in that direction. instead of just asking a yes-or-no question, i'm going to read that quote and ask what in there would get tripped up and put into the flagged category. it says, i am thankful god's grace is new every morning. and so i guess the question is what word or thought do you think would trip an algorithm for that quote, mr. zuckerberg? >> congressman, it is not
4:56 am
clear to me why that post would be a problem. i would need to look into it in more detail. sometimes the systems look at patterns of posting. so if someone is posting a lot, then maybe our system thinks it's spam. but i would need to look into it in more detail. overall the reality is that any system is going to make mistakes. there's going to be content that we take down that we should've left up. and there's going to be content that we missed that we should've taken down that we didn't catch or that the systems made a mistake on. and at scale unfortunately those mistakes can be a large number even if it's a very small percent. but i think that that's why when we're talking about things like section 230 reform, i think it is reasonable to expect large companies to have effective moderation systems but not reasonable to expect that there are never any errors. but i think that
4:57 am
transparency could help hold the companies accountable as to what accuracy and effectiveness they're achieving. >> okay. i think they did receive a notification it was for the hate policy. so -- and i understand there's going to be gray areas, whatever. but that quote, i don't see where the gray area is and how it could get caught up in that. >> i agree. >> thanks for your answer with that. i want to move on. so, mr. dorsey, i want to talk about the -- i didn't see that quote, but you said that didn't violate your policy. and in the context of that, i know cdc just recently updated its school guidance to make clear science says you can be three feet away and be safe in schools. things are changing every day because we're learning more and more about this virus. how did that not violate your policy, rfk jr., and we have an rfk and jfk iii. but rfk jr., and the policy
4:58 am
towards that and then how do you keep up with what's changing so quickly, mr. dorsey? >> you know, we can follow up with you on the exact reasoning. but we have to recognize that our policies evolve constantly and they have to evolve constantly. as has been said earlier in this testimony, we observe what's happening as a result of our policy. we've got to understand the ramifications and we improve it. and it's a constant cycle. we're always looking to improve our enforcement. >> so, mr. zuckerberg, mr. pichai, just on that continuously evolving information on covid, because we are learning more and more about it, and how do you keep up, only about 30 seconds, so if you could quick answer for each of you, if you can. mr. pichai, maybe. >> on covid, we've been really taking guidance from cdc and other health experts
4:59 am
proactively. one thing we get to do on youtube is recommend higher quality content. we have shown 400 million information panels last year including a lot from cdc and other health organizations. >> okay. thank you. and i yield back four seconds, mr. chair. >> thank you, mr. guthrie. chair now recognizes ms. matsui for five minutes. >> thank you very much, mr. chairman, for having this hearing today. today we have another opportunity to hear from the leaders of facebook, twitter, and google. and it has become a concerning pattern. the members of this committee are here to demand answers to questions about social media's role in escalating misinformation, extremism, and violence. last week i testified at a house judiciary committee hearing about the rise in discrimination and violence against asian-americans. that hearing came on the heels of a violent attack in atlanta that left eight people, six of whom were asian women, dead. the issues we are discussing
5:00 am
here are not abstract. they have real world consequences and implications that are too often measured in human lives. i'm worried, as are many watching this hearing, that the companies before us today are not doing enough to prevent the spread of hate, especially when it's targeted against minority communities. clearly the current approach is not working, and i think congress must revisit section 230. a recent study from the university of california, san francisco examined nearly 700,000 tweets in the week before and after president trump tweeted the phrase "chinese virus." the results show two alarming trends. there was a significantly greater increase in hate speech the week after the president's tweet, and that half of the tweets using the hashtag china virus showed an anti-asian sentiment compared to just one-fifth of the tweets using the hashtag covid-19. this evidence backs up what the
5:01 am
world health organization already knew in 2015, saying disease names really do matter, we've seen certain disease names provoke a backlash against members of particular religious or ethnic communities. despite this, facebook and twitter are still allowing hashtags like china virus, kung flu, and wuhan virus to spread. mr. zuckerberg, and mr. dorsey, given the clear association between this type of language and racism and violence, why do you still allow these hashtags on your platforms? anyone want to answer that, or is that not answerable? >> i think we are waiting for you to call on one of us. we do have policies against hateful conduct, and that includes trends. so when we see it associated with any hateful conduct, we will take action on it. it's useful to remember that a lot of these hashtags do contain
5:02 am
counterspeech. and people on the other side of it do own them and show why this is so terrible and why it needs to stop. >> can i just take my time back? the fact of the matter is -- algorithms to kind of get rid of these things. mr. zuckerberg, any comment here? >> thanks, congresswoman. the rise in anti-asian hate is a really big issue and something that i do think we need to be proactive about. i agree with the comments that jack made on this. on facebook, any of that content, if it's combined with something that's clearly hateful, we will take that down, it violates the hate speech policy. but one of the nuances that jack highlighted that we certainly see as well in enforcing hate speech policies is that we need to be clear about when someone is saying something because they're using it in a hateful way versus when they're denouncing it. and this is one of the things
5:03 am
that has made it more difficult to -- >> an opportunity to really look at hate speech and what it really means, particularly in this day and age when we have many instances of these things happening. and hate speech on social media can be baked down. and unfortunately this also is a trend that maybe happened years and years ago, which it might've just been a latent situation. but social media, it travels all around the world and hurts a lot of people. and my feeling, and i believe a lot of other people's feeling is that we really have to look at how we define hate speech. you all are very brilliant people, and you hire brilliant people. i would think that there is a way for you to examine this further and take it one step lower to see if it is something that is legitimate or not. and i really feel that this is a
5:04 am
time especially now in examining platforms and what you can do and should do. and as we're examining here in this committee and as we write legislation, we really want to have the entire multitude of what can and can't be done. so, with that, mr. chairman, i only have 11 seconds left, and i yield back. thank you. >> thank you. gentlelady yields back. let's see. the chair now recognizes mr. kinzinger for five minutes. >> thank you all for being here. all of this conversation is good to have. i think we also have to recognize that we're lucky to have all these companies located in the united states. when we talk about the issues and concerns, for instance, with tiktok, we can see that a lot of these companies could easily leave here and go elsewhere, and then we would have far less oversight. i think the crackdown on january 6th was correct. i think we need to be careful and not use that as a way to
5:05 am
deflect from what led to january 6th, pushing of this narrative of stop the steal. i think there are folks that are concerned though that we also need to make sure that those same levels of protection exist when you talk about like iran, for instance, and what the leaders there tweet. but let me go into specific questions. over the years we've obviously seen the rise of disinformation. it's not new. i remember getting disinformation in the '90s. but we've seen it spread on these platforms. we live in a digital world where many people get their news and their entertainment from the internet, from articles and posts that are often based off algorithms that can cater to what people see and read. so those constant news feeds simply reinforce people's beliefs or worse that they can promote disgraceful and utterly ridiculous conspiracy theories from groups like qanon. extremism and violence have grown exponentially as a result. we know it's specifically true after january 6th. mr. zuckerberg, numerous
5:06 am
external studies and some of your own internal studies have revealed that your algorithms are actively promoting divisive, hateful and conspiratorial content. do you think those studies are wrong? and if not, what are you guys doing to reverse course on that? >> sure. thank you, congressman. this is an important set of topics. in terms of groups, we stopped recommending all civic and political groups even though i think a lot of the civic and political groups are healthy, because we were seeing that that was one vector where there might be polarization or extremism, and groups might start off with one set of views but migrate to another place. so we removed that completely. and we did it first as an exceptional measure during the election. since the election we've announced we're going to extend that policy indefinitely. for the rest of the content in news feed and on instagram, the
5:07 am
main thing that i'd say is i do think that there is quite a bit of misperception about how our algorithms work and what we optimize for. i've heard a lot of people say we're optimizing for keeping people on the service. the way that we view this is we are trying to help people have meaningful social interactions. people come to social networks to be able to connect with people. if we deliver that value, then it will be natural that people use our services more. but that's very different from setting up algorithms in order to just kind of try to tweak and optimize and get people to spend every last minute on our service, which is not how we design the company or the services. >> thanks. i don't mean to interrupt you. i do have another question. mr. chairman, i want to ask unanimous consent to insert into the record an article from the "wall street journal" titled "facebook executives shut down efforts to make the site less divisive." for years i've called for increased consumer protection from companies on fake accounts and bad actors who use them to exploit others.
5:08 am
in 2015 a woman from india spent all of her money on a flight to come see me because she claimed to have developed a relationship with me over facebook. in 2019 i sent you, mr. zuckerberg, a letter, and you provided a relatively inadequate response. since then i've introduced this legislation, the account verification act, both of which aim to curb this activity. and, mr. zuckerberg, the last time you came before us, you stated that facebook has a responsibility to protect its users. do you feel that your company is living up to that and, further, what have you done to remove those fake accounts? >> thanks. so, fake accounts are one of the bigger integrity issues that we face. i think in the first half of -- well, in the last half of last year, i think we took down more than a billion fake accounts just to give you a sense of the volume. although most of those our
5:09 am
systems are able to identify within seconds or minutes of them signing up because the accounts just don't behave in a way that a normal person would in using the service. but this is certainly one of the highest priority issues we have. we see a large prevalence of it. our systems i think at this point are pretty effective in fighting it. but there is still a percentage that get through, and it's a big issue and one that we'll continue working on. >> thank you. i'd love to ask the others a question but i don't have time. so i yield back, mr. chairman. thank you for your attention. >> i thank the gentleman. the chair now recognizes -- [ inaudible] >> part of the reason for this toxic stew is that you employ manipulative methods to keep people cemented to the platform.
5:10 am
often amplifying discord. and it boosts your bottom line. you enjoy an outdated liability shield that incentivizes you to look the other way or take half measures while you make billions at the expense of our kids, our health, the truth, and now, as we've seen, the very foundation of our democracy. i've been working for over a year with advocates and other members on an update to children's protections online. you all know that tracking and manipulation of children under age 13 is against the law. but facebook, google, youtube, and other platforms have broken that law or have found ways around it. many have been sanctioned for knowingly and illegally harvesting personal information of children and profiting from it. i have a question for each of you. it's a quick yes or no.
5:11 am
did you all watch the social dilemma, where former employees of yours or other big tech platforms say they do not allow their kids on social media? mr. zuckerberg? >> congresswoman, i haven't seen it, but i'm obviously familiar with it. >> okay. mr. pichai, yes or no? >> yes, i've seen the movie. >> and? >> no. >> okay. well, mr. zuckerberg, there is a good reason that they have -- the former execs say that. are you aware of the 2019 "journal of the american medical association" pediatrics study finding that the risk of depression for adolescents rises with each hour per day spent on social media? i'm not talking about face time or sending text messages to friends. but are you aware of that research? >> congresswoman, i'm not aware
5:12 am
of that research. >> all right. what about the 2019 hhs research that suicide rates among young people ages 10 to 24 increased by 56% between 2007 and 2017 and tripled for kids between the ages of 10 and 14? yes or no? >> congresswoman, i'm aware of the -- >> so yes? certainly you are also aware of the research that indicates a correlation between the rise in hospital admissions for self-harm and the prevalence of social media on phones and in the apps on platforms that are designed to be addictive and keep kids hooked. yes? well, how about you, mr. pichai? are you aware of the jama pediatrics september 2020 study where they tested hundreds of apps used by children age 5 and under, many of which are in the
5:13 am
google play store's family section. the study found 57% of the apps tested showed transmission of identifying info to third parties in violation of the -- law? are you familiar? >> we've spent extensive time on this. we introduced a curated set of apps for kids on the play store. we give digital well-being tools so that people can take a break, set time limits for children. >> let me ask you this then, mr. pichai. how much are you making in advertising revenue from children under the age of 13? >> uh, other than a specific product designed for kids on youtube, most of our products are not eligible for children under the age of 13. >> so you're not going to provide that. mr. zuckerberg, how much advertising revenue does facebook, do you make, from behavioral surveillance advertising targeted towards kids under age 13?
5:14 am
>> congresswoman, it should be none of it. we don't allow children under the age of 13 on services that run advertising. >> are you saying that there are no kids on instagram under the age of 13 right now? >> congresswoman, children under the age of 13 are not allowed on instagram. >> well, that's not an answer. of course every parent knows that their kid that's under the age of 13 is on instagram. and you know that the brain and social development of our kids is still evolving at a young age. there are reasons in the law that we set the cutoff at 13. but because these platforms have ignored it, they've profited off of it, we're going to strengthen the law. and i encourage all of my colleagues to join in this effort. i've heard a lot of bipartisan support here today. we also need to hold the corporate executives accountable and give parents the tools that they need to take care and protect their kids. thank you. >> gentlelady, your time is expired.
5:15 am
chair recognizes mr. johnson for five minutes. >> thanks, mr. chairman. you know, over a decade ago, americans watched facebook, twitter, and google emerge from humble beginnings. we were curious to see how these new innovative companies would improve our lives. the results are in, and they're deeply concerning. we've seen a surge in cyberbullying, child porn, radical extremism, human trafficking, suicides, and screen addiction, all of which have been linked to the use of social media. our nation's political discourse has never been uglier, and we haven't been this divided since the civil war. yet, big tech marches on uninhibited. what's their newest target? children under the age of 13. news outlets this week have reported that facebook is planning to create an instagram app designed for children under the age of 13. we've talked about it here already today. elementary and middle school
5:16 am
students. by allowing big tech to operate under section 230 as is, we'll be allowing these companies to get our children hooked on their destructive products for their own profit. big tech is essentially handing our children a lit cigarette and hoping they stay addicted for life. you know, in 1994, democratic congressman henry waxman chaired a hearing with the ceos of our nation's largest tobacco companies. during his opening statement he stated, and i quote, sadly, this deadly habit begins with our kids. in many cases they become hooked quickly and develop a lifelong addiction that is nearly impossible to break. so, mr. zuckerberg and mr. dorsey, you profit from your companies' hooking users to your platforms by capitalizing on their time. so, yes or no, do you agree that
5:17 am
you make money off of creating an addiction to your platforms, mr. zuckerberg? >> congressman, no, i don't agree with that. >> thank you. that's what i needed, a yes or a no. because you do. mr. dorsey? >> no. >> okay. all right. let me go on. chairman waxman went on to say, and i quote, for decades, the tobacco companies have been exempt from the standards of responsibility and accountability that applied to all other american corporations, companies that sell aspirin, cars, and soda are all held to strict standards when they cause harm. and that we demand that when problems occur, corporations and their senior executives be accountable to congress and the public. this hearing marks the beginning of a new relationship between
5:18 am
congress and the tobacco companies. that's what chairman waxman said in 1994. so, for all three of you, mr. zuckerberg, mr. dorsey, and mr. pichai, do you agree that, as the ceos of major tech companies, you should be held accountable to congress and the public? mr. zuckerberg? >> congressman, i think we are accountable to congress and to the public. >> do you think you should be held accountable? >> i'm not sure i understand what you mean. >> it's an easy question. should you be held accountable to congress and the public? >> yes. >> for the way you run your business? >> yes, and we are. >> all right. thank you. mr. dorsey? >> yes, accountable to the public. >> okay. i said accountable to congress and the public. we represent the public. so do you agree?
5:19 am
>> yes. >> okay. thank you. mr. pichai? >> yes, i'm here today because i'm accountable to congress and members of the public. >> okay. great. well, gentlemen, let me tell you this. and i think i've heard it mentioned by several of my other colleagues. there's a lot of smugness among you. there is this air of untouchableness in your responses to many of the tough questions that you're being asked. so let me tell you all this. all of these concerns that chairman waxman stated in 1994 about big tobacco apply to my concerns about big tech today about your companies. it is now public knowledge that former facebook executives have admitted that they used the tobacco industry's playbook for addictive products. and while this is not your first hearing in front of congress, i can assure you that this hearing marks a new relationship between
5:20 am
all of us here today. there will be accountability. mr. chairman, i yield back. >> thank you. the chair now recognizes mr. mcnerney for five minutes. >> i thank the chair for organizing this hearing, and i thank the participants. this is a lot of work on your behalf and a long day for you. i appreciate that. are you all aware that your platforms are behemoths and that americans are demanding that we step in and rein in your platforms both in terms of how you handle our data and how your platforms handle disinformation that causes real harm to americans and to democracy itself? i understand the tension you have between maximizing your profits through engagement on your platforms on the one hand, and the need to address disinformation and the real harm it causes on the other hand. your unwillingness to unambiguously commit to enforcing your own policies and
5:21 am
removing the 12 most egregious spreaders of vaccine disinformation from your platforms gets right at what i'm concerned about. disinformation is a strong driver for engagement, and consequently you too often don't act even though we know you have the resources to do that. there are real harms associated with this. with my questions, i hope i don't appear to be rude, but when i ask a yes-or-no question, i will insist on a yes-or-no answer. mr. zuckerberg, yes or no. do you acknowledge that there is disinformation being spread on your platform? >> sorry, i was muted. yes, there is, and we take steps to fight it. >> thank you. yes or no, do you agree that your company has profited from the spread of disinformation? >> congressman, i don't agree with that. people don't want to see disinformation on our services, and when we do -- >> so no? you said you don't agree with
5:22 am
that, i appreciate your forthrightness on that. but we all know this is happening. profits are being generated from covid-19 and vaccine disinformation, election disinformation, qanon conspiracy theories, just to name a few things, and it's baffling that you have a negative answer to that question. well, let's move on to the next issue. mr. zuckerberg, you talk about relying on third-party fact-checkers to combat the spread of disinformation. but you tell us very little about the process. i wrote you a letter nearly two years ago asking about it, and you failed to answer my question. i asked this question again when an executive from your company testified last year, and she failed to answer it. i'd like to get an answer today. on average, from the time content is posted on facebook's platform, how long does it take facebook to flag suspicious content for third-party fact checkers to review the content
5:23 am
and for facebook to take remedial action after this review is completed? how long does this entire process take? i'm just looking for a quick number. >> congressman, it can vary. if an ai system identifies something, it can be within seconds. if we have to wait for people to report it to us and have human review, it can take hours or days. the fact checkers take as much time as they need to review things. but as soon as we get an answer back from them, we should operationalize that and attach it -- content as false. >> i understand what you're saying. but i do know that this process isn't happening quickly enough, and i'm very concerned that you aren't motivated to speed things up. because most problematic content is what gets the most views. and the longer the content stays up, the more this helps maximize your bottom line and the more harm that it can cause. it's clear that you aren't going to make these changes on your own. this is a question for all of the participants, panelists.
5:24 am
would you oppose legislation that prohibits placing ads next to what you know to be or should know to be false or misleading information, including ads that are placed in videos, promoted content, and ads that are placed above, below, or on the side of a piece of content? mr. zuckerberg, would you answer with a yes or no first, please? >> congressman, that's very nuanced. i think the question of how to determine whether something is misinformation is a process that i think would need to be spelled out well in a law like that. >> well, okay. i appreciate that. mr. dorsey? >> yes, i would -- until we see the actual requirements or what the ramifications are. >> mr. pichai, would you oppose legislation like this? >> the principle makes sense. in fact, advertisers don't want
5:25 am
content like that. we already have incentives. you can imagine reputable advertisers do not want any ads to appear next to information that could turn off their consumers. so we have natural incentives to do the right thing here. >> you say it's not in your company's interest to have disinformation on your platform so you shouldn't oppose efforts that would prevent harming american people. i yield back. >> the gentleman's time is expired. gentleman yields back. chair now recognizes mr. long for five minutes. >> thank you, mr. chairman. mr. pichai, i'm going to ask you a yes-or-no question. and just tell me if you know the difference from these two words, yes and no? >> yes. >> mr. zuckerberg, same question for you. do you know the difference of yes and no? >> yes, congressman. >> and mr. dorsey, same question for you.
5:26 am
do you know the difference in two words, yes or no? >> yes. >> i'm sorry? >> yes, i know the difference. >> okay. thank you. one of my colleagues didn't think i could get you all to answer. mr. zuckerberg, let me ask you. how do you ascertain if a user is under 13 years old? >> congressman, on services like facebook, we have people put in a birthday when they register. >> that's handy. so a 13-year-old would never -- i mean, an 11-year-old would never put in the wrong birth date and say they were 13. >> congressman, it's more nuanced than that. but i think you're getting at a real point, which is that people lie. and we have additional systems that try to determine what someone's age might be. so if we detect that someone
5:27 am
might be under the age of 13, even if they lied, we kick them off. but this is part of the reason why we're exploring having a service for instagram that allows under-13s on. because we worry that kids may find ways to try to lie and evade some of our systems. but if we create a safe system that has appropriate parental controls, then we might be able to get people into using that instead. we're still early in figuring this out, but that's a big part of the theory and what we're hoping to do here. >> and currently they're not allowed to use instagram, that's correct? >> that's correct. our policies do not allow people under the age of 13 to use it. >> i'm from missouri, the show me state, and just to say that no one under 13 could get on doesn't pass the missouri smell test. i was thinking of you, mr. zuckerberg. you created the oversight board to help hold facebook
5:28 am
accountable. they are currently looking at facebook's decision to remove president trump's facebook account. if the oversight board determines that facebook should've left president trump's account up, what will you do? >> we will respect the decision of the oversight board. and if they tell us that former president trump's account should be reinstated, then we will honor that. >> i don't know why people call attorney general ashcroft attorney general, but when they speak of president trump they call him former president. but i guess i'll leave that for another day. speaking with you again, mr. zuckerberg. it's my understanding that the facebook oversight board is comprised of members from all over the world. you're well aware the united states has stronger protections on free speech than any other country. since the decisions of the board are being made by a panel rather than a u.s. court of law, how can you assure members of this committee and the american people that the oversight board
5:29 am
will uphold free speech and make their decisions based on american laws and principles? >> congressman, the members of the oversight board were selected because of their views on free expression and strong support of it. that's why we created the oversight board, to help us defend these principles and to help us balance the different aspects of human rights, including free expression. i think the decisions the oversight board has made so far reflect that. >> okay. let me move on to mr. dorsey. mr. dorsey, i know you're from the show me state also. have you been vaccinated against covid-19? >> not yet. >> mr. pichai, have you been vaccinated against covid-19? >> sorry. i missed the question,
5:30 am
congressman. >> i know, i bore a lot of people. have you been vaccinated against covid-19? >> congressman, i was very fortunate to have received it last week. >> so you have one shot, you have another one to go? or is it johnson & johnson where you just need one? >> i still have one more shot to go. >> and, mr. zuckerberg, same question. have you been vaccinated against covid-19? >> i have not yet but hope to as soon as possible. >> okay. that's not a personal preference not to get vaccinated, you just haven't got to your age group? >> that's correct. >> okay. thank you. i just cannot believe robert kennedy jr. is out there with his anti-vax stuff, and it's allowed to stay up on twitter. with that i yield back. >> gentleman yields back. let's see who's next.
5:31 am
i don't see a name. can staff show us who's next? ah, mr. welch, you're recognized for five minutes. >> thank you, mr. chairman. what we're hearing from both sides of the aisle are enormous concerns about some of the consequences of the development of social media. the algorithmic amplification of disinformation, election interference, privacy issues, the destruction of local news, and also some competition issues. and i have listened carefully. and each of the executives has said that your companies are attempting to face these issues. but a concern i have is whether, when the public interest is so affected by these decisions and by these developments, ultimately should these decisions be made by private executives who are accountable to shareholders, or should they be made by elected
5:32 am
representatives accountable to voters? so, i really have two questions that i'd like each of you, starting with mr. zuckerberg and then mr. pichai and then mr. dorsey, to address. but first, do you agree that many of these decisions that are about matters that so profoundly affect the public interest, should they be made exclusively by private actors like yourselves who have responsibilities for these major enterprises? and, secondly, as a way forward to help us resolve these issues or work with them, will you support the creation by congress of a public agency, one like the federal trade commission or the securities and exchange commission, staffed with experts in policy and technology, with rulemaking and enforcement authority, to be an ongoing representative of the public to address these emerging issues? mr. zuckerberg? >> congressman, i agree with
5:33 am
what you're saying, and i've said a number of times that i think that private companies shouldn't be making so many decisions alone that have to balance these complicated social and public equities. and i think that the solution that you're talking about could be very effective and positive for helping out. because what we've seen in different countries around the world is there are lots of different public equities at stake here, free expression, safety, privacy, competition. and these things trade off against each other. i think a lot of these questions are the reason why people get upset at the companies. i don't think it's necessarily because the companies are negligent. i think it's because these are complex tradeoffs between these different equities. >> pardon my interruption, but i want to go to mr. pichai. but thank you, mr. zuckerberg. >> congressman, if your question is -- i just want to make sure, are you asking about whether
5:34 am
they should be under the agency. we are definitely subject to a variety of statutes and oversight by agencies like the ftc. we have consent agreements with the ftc, and we engage with these agencies regularly. >> do you believe that it should be up to the public as opposed to private interest to be making decisions about these public effects? >> we definitely think areas where there could be clear legislation informed by the public, i think that definitely is a fair approach. i would say the nature of content is so fast changing and so dynamic. you know, we spend a lot of effort hiring experts and working with third parties, and that expertise is needed, i think. >> right. and that's the problem we have in congress. because an issue pops up and there's no way we can keep up with -- you all can barely keep up with it yourself. mr. dorsey, your view on those two questions, please.
5:35 am
>> i don't think the decision should be made by private companies or the government, which is why we're suggesting a protocol approach to help the people make the decisions themselves. >> so, does that mean that the creation of an agency that would be intended to address many of these tech issues that are emerging is something you would oppose or not? >> well, i always have an open mind. we'd want to see the details of what that means and how it works in practice. >> well, of course. but the heart of it is creating an entity that has to address these questions of algorithmic transparency, of algorithmic amplification, of hate speech, of disinformation, of competition, and to have an agency that's dedicated to that, much like the securities and exchange commission was designed to stop the abuses of wall street in the '30s.
5:36 am
>> i do think there should be more regulation around the primitives of ai, but we focus a lot of our conversations right now on the outcomes of it. i don't think we're looking enough at the primitives. >> thank you. i yield back. >> gentleman yields back. chair recognizes dr. bucshon for five minutes. >> thank you, mr. chairman. first of all, i want to thank the witnesses for being here today. it's going to be a long day, and i appreciate your testimony and your answering questions. i do think it's important to understand history, excuse me, when you look at these situations. when it comes to the political side, when thomas jefferson wanted to get out an anti-adams message, even though he was adams's own vice president, he started his own newspaper, because it was pretty clear that the newspapers that were being published weren't going to change their view because there was no competitive reason to do that. and i think we're looking at
5:37 am
potentially a similar situation here. without competition things don't change. it'd be interesting to know the conversations with john d. rockefeller in the early 1900s prior to the breakup of standard oil in 1911 and then of course at&t in 1982. i understand that these are businesses, they're publicly held companies. i respect that. i understand that. i'm a capitalist. that said, these situations are a little different i think because there's some social responsibility here, and i appreciate your answers that your companies are doing what you believe is necessary. so i'm going to take the anti-trust angle here. mr. pichai, what do you think -- what's the situation when 92% of searches go through google? you basically can't get on the internet without some sort of google service. what do you think is going to happen? what do you think we should do
5:38 am
about that? >> congressman, we definitely are engaged in conversations as well as lawsuits in certain cases. we understand the scrutiny here. we are a popular search engine. but we compete vigorously in many of the markets we operate in. for example, majority of our revenue comes from product searches. we definitely see a lot of competition by category. there are many areas as a company we are an emerging player. be it making phones or trying to provide enterprise software, we compete with other larger players as well. if you look at last year and look at all the new entrants in the market, new companies that emerged strongly. in tech shows that the market is vibrant and dynamic. as google we have invested in many startups. google has started -- a form of
5:39 am
google employees have started over 2,000 companies in the past 15 years. and so i see a highly dynamic, vibrant tech sector. and we are committed to doing our part. >> okay. fair enough. mr. zuckerberg, do you have some comments on that subject? >> congressman, i would echo sundar's comments. this is a highly competitive market. i mean, this is a hearing about social media. not only do you have the different companies that are here today that all offer very big services that compete with each other, but you have new entrants that are growing very quickly like tiktok, which is reaching a scale of hundreds of millions or billions of people around the world. and i think it's growing faster than any of our services or companies that are up here today and certainly competitive with us. that's just naming a few. obviously there's snapchat and a bunch of other services as well. it's a very competitive
5:40 am
marketplace. >> and do you think -- i'll ask you this, mr. zuckerberg. i think you've commented that some of the privacy things that maybe the europeans did would kind of solidify your dominance as a company. so what should we do in the united states on this? it's a different subject but similar -- to not do something that would stymie innovation, competition, and, in my view, further create a monopolistic or at least a perceived monopolistic environment. >> congressman, i do think that the u.s. should have federal privacy legislation because i think we need a national standard. and i think having a standard that is across the country that's harmonized with standards in other places would actually create clearer expectations for industry and make it better for everyone. but i think the point that you're making is a really important one, which is if we ask companies to lock down data, then that, to some degree, can
5:41 am
be at odds with asking them to open up data to enable whether it's academic research or competition. so i think that when we're writing this privacy regulation, we just should be aware of the interaction between our principles on privacy and our principles on competition. and that's why i think a more holistic view like what congressman welch was just proposing is perhaps a good way to go about this. >> okay. quickly, mr. dorsey, do you have any comments on that? >> one of the reasons we're suggesting more of a protocol approach is to enable as many new entrants as possible. we want to be a client on that. -- >> gentleman's time is expired. with that i'll yield back. the chair recognizes ms. clarke for five minutes. >> thank you, mr. chairman. i thank you, the chairs and the ranking members, for today's hearing. i also thank our witnesses for appearing. in january i called for public comment on the discussion draft
5:42 am
of my bill, the civil rights modernization act of 2021, a narrowly focused proposal to protect historically marginalized communities from the harms of targeted advertising practices. these harms can and have infringed on the civil rights of protected classes. and i'm proud to formally introduce this bill next week to diminish inequities in the digital world. i ask our witnesses to please answer the questions as succinctly as possible for time's sake. the first question goes to mr. zuckerberg. facebook currently provides their advertisers with insight on how to get their ads in front of people who are most likely to find their ads relevant by utilizing tools that use criteria like consumers' personal interests and geography to fine-tune the targeting. this is often-used code that targets or avoids specific races or other protected classes of
5:43 am
people. i'm aware of the updates to your special ad audience. however, why does facebook continue to allow for discrimination in the placement of advertisements that can violate civil rights laws? >> we've taken a number of steps to eliminate ways that people can target different groups based on racial affinity and different ways that they might discriminate. because this is a very important area. and we have active conversations going on with civil rights experts as to the best ways to continue improving these systems, and we'll continue doing that. >> mr. dorsey, twitter allows advertisers to use demographic targeting to reach people based on location, language, age and gender. in july your company made changes to your ad targeting policy to advise advertisers to, quote, not wrongfully discriminate against legally
5:44 am
protected categories of users. what did twitter mean by the phrase wrongfully discriminate? are some kinds of discriminatory advertising allowed on twitter? if so, would you please explain? >> no, not at all. >> i'm sorry, i didn't get that answer. >> no, none at all. >> okay. and, so, can you explain what you meant by wrongfully discriminate? >> we mean that you just shouldn't use our ads to discriminate. >> oh, okay. mr. pichai, google has recently announced a new approach in their targeting system called floc or federal learning of cohorts, excuse me, federated learning of cohorts, to allow ad targeting to groups of people with similar characteristics. the new system will utilize machine learning to create these cohorts from consumers'
visits to websites. given the potential bias and disparate impact of machine learning algorithms, how has google addressed the potential discriminatory impact of this new floc system? >> congresswoman, it's an important area. we recently announced a joint collaboration with hud to ban ads that would target by gender, family or zip code. so we'll bring similar -- particularly when we are using machine learning. on floc, we will be publishing more technical proposals, and they will be held to our ai principles, which prohibit, you know, discrimination based on sensitive categories, including race. and we will be happy to consult and explain our work there. >> i appreciate that. gentlemen, i just want you to be aware that the longer we delay on this, the more that these
systems that you have created bake discrimination into these algorithms. i think it is critical that you get in there and do what is in the best interest of the public of the united states of america and undo a lot of the harm that has been created by the bias that has been baked into your systems. with that, mr. chairman, i yield back 23 seconds, and thank you for this opportunity. >> and i thank the gentlelady for that. the chair now recognizes mr. walberg for five minutes. >> thank you, mr. chair, and thanks to the panel for being here. from what i've listened to so far today, i'd have to say that based upon what many of us in congress say about the best legislation -- that when both sides don't like it, it's probably good -- you've certainly hit that today. i think from both sides you have
been attacked for various reasons. but i have to say the platforms that you've developed are amazing, and they have huge potential. they have indeed enabled us to go in directions -- information, communications, relationships -- that can be very positive, and what's been accomplished is amazing. i think it comes down to how that is controlled and who controls it. going back to our foundations as a country, it was our second president, john adams, who said that our constitution was meant for a moral and religious people. i think a lot of the problems that you're frustrated with are a result of parents and families, churches and schools that aren't taking the primary responsibility. i get that. so it comes down to the choice that's left for the people, which is
really between conscience and the constable. we're either going to have a conscience that self-controls. and from what you said, mr. zuckerberg, i wouldn't mind my 3- and 5-year-old granddaughters coming to your house. i'm not asking for the invitation, but i think they'd be safe there relative to the online capabilities, from what you said. but that's conscience versus constable, and what i've heard today is that there will be some constable. and i'm not sure that we'll have success in moving forward. i guess, mr. chairman, unfortunately we've been here before. we've been here many times. a few years ago, when mr. zuckerberg was here before this committee, i held up a facebook post by a state senator in michigan whose post was simply announcing his candidacy as a republican for elected office. and yet it was
well, hiding behind section 230, all of you have denied there's any bias or inequitable handling of content on your platforms. and yet the pew research center found -- and this is where i have the problem, not so much with the platform or the extent of what is on the platform -- that 72% of the public believes big tech companies actively censor political views they find objectionable. further, and i quote, by a 4-to-1 margin, respondents were more likely to say big tech supports the views of liberals over conservatives, probably equaled
only by higher education. and yet every time this happens, we fall back on glitches in the algorithms. a former google insider said, before he was suspended by google, that algorithms don't write themselves -- we write them to do what we want them to do. that's my concern. whether it's censoring pro-life groups like live action or pro-second amendment groups like the well armed woman, platforms continually shut down law-abiding citizens and constitutional discussions and commerce that don't align with big tech's world view, and this includes the first and second amendments. that causes me to be concerned that you don't share the same concerns for freedom. he said, if you're asking if i
feel particularly comfortable that he should not express his views on twitter -- i don't feel comfortable about that, because yesterday it was donald trump banned and tomorrow it could be somebody else. mr. zuckerberg or mr. dorsey, do you think the law should allow you to be arbiters of truth as it has under section 230 [ inaudible ] mr. zuckerberg, first. >> i think it is good to have a law that allows platforms to moderate content. but as i said today, i think we'd benefit from more transparency and accountability. >> i don't think we should be the arbiters of truth, and i don't think the government should be either. >> the gentleman's time has expired. the chair now recognizes mr. cardenas for five minutes.
and thank you, mr. chairman, for having this important hearing. i have a letter from the national hispanic media coalition on disinformation on social media -- if we could submit it for the record. my first question is to you, mr. zuckerberg. facebook brought in approximately $86 billion of revenue in 2020. is that about right? give or take. >> i think that's right. >> how much of that revenue did facebook invest in combating misinformation on that portion of your business? >> i don't know the exact answer, but we invest billions of dollars in our integrity programs, including having more than a thousand engineers working on this and 35,000 people doing content review across the company.
>> how many full-time employees? >> around 60,000. >> so, you're saying over half the people in the company are doing some portion of content review, etc., which is the main subject we seem to be talking about today? >> no, because you asked about full-time employees, and some of the content reviewers are contractors. >> all right. there seems to be a disparity between the different languages used on the platform in america. for example, there was a study published in april of over 100 items of misinformation on facebook in six different languages, which found that 70% of the spanish-language content analyzed had not been labeled by facebook, compared to the rate for english-language content. what kind of investment is
facebook making in the different languages to make sure that we have more accuracy in flagging the disinformation and misinformation? >> congressman, thanks. we have an international fact-checking program, where we work with fact checkers in more than 80 countries in a bunch of different languages, and in the u.s. specifically, we have spanish-speaking fact checkers as well as english-speaking fact checkers. that's on the misinformation side. when we create resources, whether around covid information or election information, we translate those hubs so they can be available in english and spanish, and people can see the content in whatever language they prefer. >> so, basically, you're saying this is expensive? >> it's something we continue to invest more in.
>> i like the last portion. i do believe in it and would love to see you invest more. my 70-plus-year-old mother-in-law commented to me the other day that her friends, who communicate mainly in spanish and do use the internet and some of your platforms, were worried that with the vaccine somebody is going to put a chip in their arm. for god's sake, it is unbelievable to me that they would believe that. but they got most of that information on various platforms. clearly spanish-language disinformation is an issue, and i'd like to see all of your platforms address these issues, not only in english but in all languages. i think it's important to understand a lot of hate is being spewed on the internet. there are 23 people dead in el paso because somebody filled this person's head with a lot of
hateful nonsense and he drove there specifically to kill mexicans along the texas/mexico border. eight people are dead in atlanta because anti-asian hatred and misinformation have been permitted to spread on these platforms pretty much unchecked. the spreading of hatred is a deadly problem in america, and we need to see it stopped. do you believe you've done enough to combat these types of issues? >> i believe that we've done more than any other company, but there's still a problem and more that needs to be done. >> good. you'd like to do more. thank you. i want to ask this question to all three of you. do you think each one of your organizations should have an executive-level individual in charge, reporting directly to
the ceo? do you agree that should be the case? >> congressman, we have an executive-level person who's in charge of the integrity team we talked about, and he's on my management team. >> he reports directly to you? >> he does not. a lot of people on the management team report to him. >> to the other two witnesses, very quickly. >> congressman, we have someone who reports directly to me who oversees all these areas. >> thank you. >> thank you so much. >> the gentleman's time has expired. the chair now recognizes mr. carter for five minutes. >> thank you, mr. chairman. i want to ask, mr. zuckerberg, if you're aware, as all of us are, of the disaster at the southern border, where human smugglers have been using social media, including
facebook, whatsapp and instagram to coordinate operations in transporting illegal immigrants into the united states -- things like what to say to authorities, transportation tips and other information traded on your platform to evade authorities, contributing to the crisis, the disaster at the border. mr. zuckerberg, do you feel complicit in any way that your platform is assisting in this disaster? >> first, let me say -- >> you know what's happening at the border. i'm asking specifically about your platform. do you feel complicit? >> we have policies against this content, and we're working to fight it with policies against scams and against pages, groups and events like the content you're talking about. we're also seeing the state department use information and
people. >> i'm talking about coyotes using your platform to spread this misinformation and assist in this illegal activity, which is resulting in horrible conditions for the people who are trying to come across the border. >> congressman, that's against our policies, and again, let me say the situation at the border is serious and we're taking it seriously. >> i hope you look into these reports that your platform is being used by these traffickers. this is something we need your help with, and i hope you feel a sense of responsibility, sir, to help us with this, because we certainly need it. you've dedicated a lot of your written testimony to election issues, and even today, at this hearing, you've been public in pushing back on the election claims in november. yet facebook has been
essentially silent on the attempted theft of the certified election in iowa of representative miller-meeks. why is that? why are you silent on that, yet not other elections? >> i think what we saw leading up to january 6th was unprecedented in american history, where you had a sitting president trying to undermine the peaceful transfer of power. >> so you can determine which is important and which is not? this is a certified representative. this is the most important thing to her as well. >> congressman, i think part of what made the january 6th events extraordinary was not just that the election was contested -- >> let me ask you this. what is it that makes this particular issue irrelevant, that you're not covering it? >> i didn't say irrelevant. but on january 6th we had insurrectionists storm the
capitol, leading to the death -- >> i'm aware of that. i was there. i understand what happened. but again, will you commit to treating this as a serious election concern? >> congressman, i will commit to that, and we apply our policies to all situations. i think this is different from what happened january 6th, but we apply our policies equally in these cases. >> mr. dorsey, you, too, have been very silent on this issue on your platform. would you commit to treating this as a serious concern, the attempted theft of the certified seat in iowa? >> yes, we're looking for all opportunities to minimize anything that takes away from the integrity of elections. >> mr. dorsey, while i've got you, let me ask you. you started a new program
called birdwatch, which allows people to identify information in tweets that they believe is misleading and write notes to provide context, in an effort to stop misleading information from spreading. have you seen -- we've seen mobs on twitter use this even when the information being shared is accurate. why do you think birdwatch is going to work, given the culture you've created on your platform? >> well, it's an experiment, and we wanted to experiment with a more crowdsourced approach than us going around and doing all this work. >> don't you think that's a dangerous experiment when you're taking [ inaudible ] information? >> no, it's an alternative. >> alternative. >> i think we need to experiment as much as possible to get to the right answers. >> that's fine as long as you're not the one being experimented on. as long as you're -- >> the gentleman's time has expired.

